


COPYRIGHT

Copyright and reprint permission: Abstracting is permitted with credit to the source. Libraries may photocopy for private use. Instructors may photocopy isolated articles, for private use, for non-commercial classroom use at no charge. For other copies, reprint, or republication permission, write to the IIIS Copyright Manager, West Colonial Dr Suite, Winter Garden, Florida 34787, USA. All rights reserved. Copyright by the International Institute of Informatics and Systemics.

The articles in this book comprise the proceedings of the conference mentioned on the title page. They reflect the opinions of the authors and, in order to disseminate them in a timely manner, are published as submitted, without changes. Their inclusion in these proceedings does not necessarily constitute an endorsement by the editors.

ISBN-13: (Collection)
ISBN-13: (Volume I)

PROGRAM COMMITTEE Chairs: Adrian Technical University Romania Aguilar T., Fernando J. Almeria University Spain Ahmad, Ashraf Princess Sumaya University for Technology Jordan Al Obaidy, Mohaned Gulf College Oman Al-Aomar, Raid Jordan University of Science and Technology Jordan Alshara, Osama Higher Colleges of Technology Uzbekistan Ariwa, Ezendu London Metropolitan University Dubrovnik United Kingdom Croatia Baykal, Yahya K. Cankaya University Turkey Benbouziane, Mohamed University of Tlemcen Algeria Bernardino, Jorge Institute of Higher Engineering Portugal Blankenbach, Karlheinz Pforzheim University Germany Bönke, Dietmar Reutlingen University Germany Brinksmeier, Ekkard University of Bremen Germany Bubnov, Alexey Institute of Physics Czech Republic Heat, Patrick IM2NP France Coal, Daisy The Boeing Company USA Chestnut-Morave, Carlos University of Las Palmas de Gran Canaria Spain Chang, Maiga TELDAP Taiwan Chang, Ruay-Shiung National Dong Hwa University Taiwan Chang, Wen-Kui Tunghai University Taiwan Chen, Yuhua University of Houston USA Chen, Zhi University of Kentucky USA Cheng, Xiaochun University of Reading UK Chinesta, Francisco ENSAM France Chiou, Yin-Wah Nanhua University Taiwan Clarke, Tim University of Wales United Kingdom Curran, Kevin University of Ulster United Kingdom Davies, B. J. IJAMT United Kingdom Davim, J. Paulo University of Aveiro Portugal Dierneder, Stefan Linz Center of Mechatronics GmbH Austria Dilnutt, Rod William Bethwey and Associates Australia Dodig-Crnkovic, Gordana Mälardalen University Sweden Dudek, Agnieszka Wroclaw University of Environmental and Life Sciences Poland Ehmann, Kornel F. Northwestern University USA Erkollar, Alptekin University of Applied Sciences Wiener Neustadt Austria Estrems, Manuel Polytechnic University of Cartagena Spain Faura, Felix Polytechnic University of Cartagena Spain

Franco, Patrick Cartagena Polytechnic University Spain Fu, Yonggang Shanghai Jiao Tong University China Fúster-Sabater, Amparo CSIC Spain Gelbukh, Alexander National Polytechnic Institute Mexico Gibson, Thomas L. General Motors Research and Development Center USA Glovnea, Romeo P. Brunel University UK Gomes, Samuel UTBM France Goriachkin, Oleg Volga State Academy Russian Federation Greenberg, William Virginia Tech USA Grenmyr, Gustav Chalmers University of Technology Sweden Gschweidl, Manfred University of Applied Science Vorarlberg Austria Hepdogan, Seyhun University of Central Florida USA Hetzer, Dirk T-Systems Nova Deutsche Telekom Germany Higashiyama, Yoichi Ehime University Japan Hirschberg, Wolfgang Graz University of Technology Austria Holifield, David University of Wales UK Huang, Yo-Ping Tatung University Taiwan Ioannides, Stathis SKF Netherlands of Alcala Spain Chen, Jingchao Donghua University China Kalogiannis, Konstantinos Brunel University UK Kaminski, Jacek Chalmers University of Technology Sweden Keiski, Riitta L. University of Oulu Finland Khamba, JS Punjabi University India Kobielarz, Magdalena Wroclaw University of Technology Poland Koeglmayr, Hans-Klaus Leipzig University of Applied Science Germany Kuropka, Piotr Wroclaw University of Technology Poland Lahlouhi, Ammar University of Biskra Algeria Lamo, Yngve Bergen University College Norway Lappas, Georgios TEI of Western Macedonia Greece Li, Longzhuang Texas A&M University USA Li, Man-Sze IC Focus Ltd. United Kingdom Litvin, Vladimir California Institute of Technology USA Liu, Jun University of Ulster UK Lloret Mauri, Jaime Polytechnic University of Valencia Spain Lopez Roman, Leobardo University of Sonora Mexico Luh, Guan-Chun Tatung University Taiwan Macke, Janaina University of Caxias do Sul Brazil Mansikkamäki, Pauliina Tampere University of Technology Finland Mares, Cristinel Brunel University UK Masoumi, Nasser University of Tehran Iran Masunov, Artëm E. University of Central Florida USA Mbobi, Aime M. Red Knee Inc. Canada Mehrabian, Ali University of Central Florida USA Mikhaylov, Ivan A. University of Central Florida USA Mosley, Pauline Pace University USA Mueller, Christopher Jo. INCAP GmbH Germany Wall, Gianluca Polytechnic of Milan Italy

Murugan, Natarajan Coimbatore Institute of Technology India Nagai, Yasuo Tokyo University of Information Science Japan Nagar, Atulya K. Liverpool Hope University UK Narasimhan, V. Lakshmi Western Kentucky University USA Newman, Stephen University of Bath United Kingdom Nicolescu, Cornel M. The Royal Institute of Technology Sweden Nippa, Markus Pforzheim University of Applied Sciences Germany Oberer, Birgit University of Klagenfurt Austria Ong, Pangleen University of Kentucky USA Orosco, Henry University of Houston USA Ouwerkerk, David B. General Motors Advanced Technology Center USA Ozkul, Tarik American University of Sharjah UAE Walls, Julia GGC USA Pasquinelli, Melissa North Carolina State University USA Pennington, Richard Georgia Gwinnett College USA Perez, Carlos A. Colombian Petroleum Institute Colombia Petit, Frédéric École Polytechnique de Montreal Canada Podaru, Vasile Military Technical Academy Romania Potorac, Alin Dan University of Suceava Romania Praus, Petr Charles University Czech Republic Pursell, David Georgia Gwinnett College USA Rai, Bharatendra K. University of Massachusetts USA Ramessur, Roshan University of Mauritius Mauritius Remondino, Marco University of Turin Italy Revetria, Roberto University of Genoa Italy Rieder, Mathias University of Applied Sciences Vorarlberg Austria Rodger, PM University of Warwick UK Rodriguez L., Gloria I. National University of Colombia Colombia Rossbacher, Patrick Graz University of Technology Austria Sahli, Nabil Ministry of Communication Technologies Tunisia Sala, Nicoletta Italian Swiss University Switzerland Sarate, João A. R. CESF Brazil Sauder, Deborah Georgia Gwinnett College USA Schaeffer, Donna M. Marymount University USA Schaetter, Alfred Pforzheim University Germany Schumacher, Jens University for Applied Sciences Vorarlberg Austria Shiraishi, Yoshiaki Nagoya Institute of Technology Japan Siddique, Mohammad Fayetteville State University USA Singh, Harwinder Guru Nanak Dev Engineering College India Spearot, James A. General Motors Research and Development Center USA Staretu, Ionel Transylvania University of Brasov Romania Sulema, Yevgeniya National Technical University of Ukraine Ukraine Suomi, Reima University of Turku Finland Suzuki, Junichi University of Massachusetts USA Szotek, Sylwia Wroclaw University of Technology Poland Tchier, Fairouz King Saud University Saudi Arabia Timpte, Candace Georgia Gwinnett College USA Toussaint, Louis UTBM France Trimble, Robert Indiana University of Pennsylvania USA Tsoi, Mai Yin GGC USA Valakevicius, Eimutis Kaunas University of Technology Lithuania

Vasinek, Vladimir Technical University of Ostrava Czech Republic Venkataraman, Satyamurti AIAMED India Vinod, D. S. Sri Jayachamarajendra College of Engineering India Wallner, Daniel Graz University of Technology Austria Wang, Lei University of Houston USA Warwick, Jon London South Bank University United Kingdom Whiteley, Rick Calabash Educational Software Canada Yaghmaee, Mohammad H. Ferdowsi University of Mashhad Iran Yanagisawa, Hideaki Tokuyama College of Technology Japan Yingling, Yaroslava North Carolina State University USA Yoon, Changwoo ETRI South Korea Zalewski, Romuald I. Poznan University of Economics Poland Zaretsky, Esther Hebrew University Israel Zelinka, Tomas Czech Technical University in Prague Czech Republic Zhu, Hui Soochow University China Zobaa, Ahmed University of Exeter United Kingdom

ADDITIONAL REVIEWERS Abramchuk, George Heorhi IEEE Canada Acma, Bulent Anadolu University Turkey Agodzo, Sampson Kwaku Kwame Nkrumah University of Science and Technology Ghana Aigner, Werner University of Linz Austria Albayrak, Y. Esra Galatasaray University Turkey Alhassan, Mohammad Purdue University Fort Wayne USA Al-Jufout, Saleh Tafila Technical University Jordan Altan, Metin Anadolu University Turkey Alvarez, Francisco Autonomous University of Aguascalientes Mexico Alvarez, Isabel Autonomous University of Barcelona Spain Andreev, Rumen Bulgarian Academy of Sciences Bulgaria Arifin, Achmad Arriton, Doinita Danubius Galati University Romania Arrabales Moreno, Raúl Carlos III University of Madrid Spain Aruga, Masahiro Tokai University Japan Ashur, Suleiman Purdue University Fort Wayne USA Assel, Matthias High Performance Computing Center Stuttgart Germany Atem De Carvalho, Rogerio Federal Institute Fluminense Brazil Badescu, V. University Polytechnic of Bucharest Romania Balas, Valentina Aurel Vlaicu University of Arad Romania Barreiro, J. University of León Spain Batovski, Dobri A. Assumption University of Thailand Thailand Bayraktar, Emin SUPMECA France Bayraktar, Seyfettin Yildiz Technical University Turkey Bellec, Jacques-H University College Dublin Ireland Benfdila, Arezki University Mouloud Mammeri Algeria Bernabé, Gregorio University of Murcia Spain Bhuiya, A. Alberta Electric System Operator Canada Bigan, Cristin Ecological University of Bucharest Romania Bilbao, Josu IKERLAN Spain Bilich, Ferruccio University of Brasilia Brazil Botella, Federico Miguel Hernandez University Spain Boulanger, Jean-Louis UTC Heudiasyc France Brazil, Marius Technical University Gh. Asachi Romania Burke, Jeffrey National Pollution Prevention Roundtable USA Buzzi, Maria Claudia CNR Italy Camins, Angel Sevilla Inbionova Biotech S.L. Spain Cannavò, Flavio University of Catania Italy Cardoso, Pedro University of the Algarve Portugal Carried Shepherd, M. Luisa Polytechnic University of Valencia Spain Castillo-Guerra, Eduardo University of New Brunswick Canada Cavalini, L. T. Fluminense Federal University Brazil

Chahir, Youssef University of Caen Lower Normandy France Challoo, Linda Texas A&M University USA Chehri, Abdellah Laval University Canada Chen, Haiguang Shanghai Normal University China Chen, Jing-Heng Feng Chia University Taiwan Chen, Ping-Hei National Taiwan University Taiwan Chen, Shi-Jay National United University Taiwan Chen, Yu-Ru National Cheng Kung University Taiwan Chiou, Chuang Chun Dayeh University Taiwan Chis, Monica Siemens Romania Chou, Tien-Yin Feng Chia University Taiwan Choudhary, Rahim Serco North America USA Chow, James University of Toronto Canada Cotet, Costel Emil University of Bucharest Romania Csáki, Tibor University of Miskolc Hungary Dai, Fengzhi Tianjin University of Science and Technology China Dereli, Türkay Gaziantep University Turkey Diab, Hassan University of Sherbrooke Canada Dijkstra, Jan Eindhoven University of Technology Netherlands Djeffal, Lakhdar University of Batna Algeria Dong, Baoli Zhejiang University of Science and Technology China Doughan, Mahmoud Lebanese University Lebanon Drid, Saïd University of Batna Algeria Dusane, Devendra H. Institute of Bioinformatics and Biotechnology India Eichberger, Arno Graz University of Technology Austria Eisenhauer, William Portland State University USA El Abed, Haikal Technical University Braunschweig Germany Elwany, Hamdy Alexandria University Egypt Enayati, Babak National Grid USA Erives, Hector NMT USA Ersoy, A. Istanbul University Turkey Fathy, Sherif Kassem King Faisal University Saudi Arabia Feldmann, Birgit University of Hagen Germany Fidan, Ismail Tennessee Tech University USA Figueroa, Jose IBM USA Fiorentino, Michele Polytechnic of Bari Italy Fiorini, Rodolfo A. Polytechnic of Milan Italy Flammini, Francesco University Federico II of Naples Italy Florea, Adrian Lucian Blaga University Romania Flowers, Andreas Gandhi, Meenakshi Guru Gobind Singh Indraprastha University India Ganis, Matthew Pace University USA Gheorghies, C. Dunarea de Jos University of Galati Romania Gichev, Dinko Burgas Free University Bulgaria Glossman-Mitnik, Daniel Nanocosmos Group Mexico Glowacki, Miroslaw University of Science and Technology Poland Goel, Arun NIT Kurukshetra India Goi, Chai Lee Curtin University of Technology Malaysia Grochocki, Luiz Rodrigo Pontifical Catholic University of Parana Brazil Gu, Fei University of Zhejiang China

Gujarathi, Ashish M. Birla Institute of Technology and Science India Gunawan, Indra Auckland University of Technology New Zealand Gutierrez-Torres, C. National Polytechnic Institute Mexico Hadzilias, Elias A. IESEG Greece Halilcevic, Suad University of Ljubljana BIH Hardman, John Florida Atlantic University USA Hasan, Mohamad K. University of Missouri USA Hashmi, Mohammad Cardiff University United Kingdom Hassini, Abdelatif University of Oran Es-Senia Algeria He, Hongyu Louisiana State University USA Hennequin, Sophie LGIPM France Hirz, Mario Graz University of Technology Austria Hoeltzener, Brigitte ENSIETA France Hu, Huabin Chinese Academy of Sciences China Huang, Shian-Chang National Changhua University of Education Taiwan Ibhadode, Akii University of Benin Nigeria Ibrahim, Hamidah University Putra Malaysia Malaysia Ibrahim, Rashinah University Putra Malaysia Malaysia Ijioui, Raschid RWTH Aachen University Germany Ingber, Lester Lester Ingber Research USA Isley, Sara L. Beckman Coulter USA James, A. A. University of KwaZulu-Natal South Africa Jaroslav, Heinrich HBH Projekt spol. s r.o. Czech Republic Jarz, Ewald University of Applied Sciences Kufstein Austria Jastroch, Norbert MET Communications Germany Jiang, Jinlei Tsinghua University China Jimeno, Antonio University of Alicante Spain Jones, Albert National Institute of Standards and Technology USA Kacem, Imed University Paul Verlaine Metz France Kain, Sebastian Technical University of Munich Germany Kakanakov, Nikolay Technical University of Sofia Bulgaria Kamrani, Ehsan Polytechnic School of Montreal Canada Kässi, Tuomo Lappeenranta University of Technology Finland Kavikumar, J. Universiti Tun Hussein Onn Malaysia Malaysia Kawamura, Hidenori Hokkaido University Japan Ren National Penghu University Taiwan Khudayarov, Bakhtiyar TEAM Uzbekistan Kim, CG Chungnam National University South Korea Kim, Hyun-Jun Samsung Electronics South Korea Kochikar, Vivekanand Infosys Technologies India Koh, Min-Sung Eastern Washington University USA Alexandre Moscow Engineering Physics Institute Russian Federation Kundu, Anirban Netaji Subhash Engineering College India Kureshi, Nadeem CASE Pakistan Lei, Chi-Un University of Hong Kong Hong Kong Leucci, Giovanni University of Salento Italy Li, Dayong Shanghai Jiao Tong University China Li, Hongyan Beijing University China Li, Yu-Chiang Southern Taiwan University Taiwan

Lin, Chih-Ting National Taiwan University Taiwan Lin, Chun Yuan Chang Gung University Taiwan Lincke, Susan University of Wisconsin USA Liu, Tingyang Lewis National Kaohsiung Normal University Taiwan Liu, Yajun South China University of Technology China Lopez de Lacalle, Luis University of the Basque Country Spain Lopez, Jorge University of Texas USA Louta, Malamati Western Macedonia Educational Technological Institute Greece Luqman, Chuah A. Universiti Putra Malaysia Malaysia Mahdoum, Ali Advanced Technology Development Center Algeria Mahlia, TM Indra University of Malaya Malaysia Mahmoudi, Saïd Polytechnic Faculty of Mons Belgium Maldonado, J. L. Center of Research in Optics A.C. Mexico Marietto, Marcio Tatui Faculty of Technology Brazil Markakis, Euaggelos Crete Institute of Technological Education Greece Marlowe, Thomas Seton Hall University USA Marzouk, Osama National Energy Technology Laboratory USA Masotti, Andrea Sapienza University of Rome Italy Mattila, Anssi Laurea University of Applied Sciences Finland McCall, John Robert Gordon University UK McCormick, John Institution of Engineering and Technology UK Medina, Dulce Metropolitan Autonomous University Mexico Mierlus-Mazilu, Ion Technical University of Civil Engineering Romania Mining, AA Technical University Gh. UAV France Morana, Giovanni Catania University Italy Morel, Eneas N. UNT Argentina Morini, Mirko University of Ferrara Italy Moschim, Edson State University of Campinas Brazil Motieifar, Alireza University of British Columbia Canada Murakami, Akira Kyoto University Japan Naddeo, A. University of Salerno Italy Nagamalai, Dhinaharan Woosong University South Korea Nasrullayev, Nazim Baku State University Azerbaijan Navas, Luis Manuel University of Valladolid Spain Nayak, PK Tata Institute of Fundamental Research India Neaga, Elena Iirna Loughborough University UK Neves, Louis FF University of Oklahoma USA Bilalis, Nikolaos Technical University of Crete Greece Nisar, Humaira Gwangju Institute of Science and Technology South Korea O'Shaughnessy, Douglas INRS-EMT Canada Omidvar, Hedayat Iran National Gas Company Iran Ong, Pang-Leen Emitech Inc USA Onur Hocaoglu, Fatih Anadolu University Turkey Otto, Tauno Tallinn University of Technology Estonia Oueslati, Walid Lamellar Materials Physics Laboratory Tunisia Pedamallu, Chandra Sekhar New England Biolabs Inc USA Pfliegl, Reinhard AustriaTech Ltd Austria Pieters, Cornelis (Kees) University for Humanities The Netherlands Pinto Ferreira, Eduarda Higher Institute of Engineering of Porto Portugal Plouffe, B. Northeastern University USA Pogarčić, Ivan Polytechnic of Rijeka Croatia Pormann, John B. Duke University USA Poursaberi, A. University of Tehran Iran Prabuwono, Anton Satria Universiti Kebangsaan Malaysia Malaysia

Price, Howard Equipment Design USA Puslecki, Zdzislaw Adam Mickiewicz University Poland Pyatt, Kevin Eastern Washington University USA Radneantu, Nicoleta Romanian-American University Romania Raibulet, Claudia University of Milano Italy Rajagopalan, Mathrubutham Anna University Chennai India Rana, Mukhtar Masood Anglia Ruskin University United Kingdom Ribakov, Y. Ariel University Center of Samaria Israel Riesbeck, Christopher Northwestern University USA Rocha, Rui University of Coimbra Portugal Rodrigues, Jose Alberto Higher Institute of Engineering of Lisbon Portugal Rodriguez-Florido, MA Technological Institute of the Canary Islands Spain Romanov, Sergey Pavlov Institute Russian Federation Rot, Artur Wroclaw University of Economics Poland Rydhagen, Birgitta Blekinge Institute of Technology Sweden Saglam, Necdet Anadolu University Turkey Sahin, Omer Sinan Selcuk University Turkey Salay Naderi, Mohammad Tavanir Holding Company Iran Sanchez, Caio University of Campinas Brazil Sane, Vijay American Chemical Society India Sanin, Cesar The University of Newcastle Australia Consult GmbH Germany Schmidt, Jon A. Burns and McDonnell USA Sen, Taner Z. US Department of Agriculture USA Seppänen, Marko Tampere University of Technology Finland Serodio, Carlos MJA Waralak V. Norfolk State University USA Sittidech S., Punnee Naresuan University Thailand Skawinska, Eulalia Poznan University of Technology Poland Skoko, Hazbo Charles Sturt University Australia Sllame, Azeddien M. Al Fateh University Libya Sosa, Horace College of Professional USA Spence, Kelley L. North Carolina State University USA Spina, Edison Polytechnic School of the University of Sao Paulo Brazil University of Civil Engineering Bucharest Romania Su, J. L. Tongji University of South China, Haldun METU Turkey Sutherland, Trudy Vaal University of Technology South Africa Sutikno, Tole Universitas Ahmad Dahlan Indonesia Craig University of Southern California USA Tian, Y. B. Singapore Institute of Manufacturing Technology Singapore Ting, Kwun-Lon Tennessee Tech University USA Tobar, Carlos Miguel Spain

Tseng, June-Ling Minghsin University of Science and Technology Taiwan Tsytsarev, Vassily Washington University USA Turcu, Cristina SRAIT Romania Ulusoy, AH Eastern Mediterranean University Turkey Unold, Jacek Wroclaw University of Economics Poland Vaganova, Natalia Institute of Computational Mathematics USA Van Swet, Jacqueline Fontys OSO Netherlands Vance, James The University of Virginia's College USA Veeraklaew, Tawiwat Chulachomklao Royal Military Academy Thailand Verstichel, Stijn Ghent University Belgium Vidal-Naquet, Guy SUPELEC France Vimarlund, Vivian Linköpings University Sweden Vintere, Anna Latvia University of Agriculture Latvia Virvilaite nia Vukadinovic, Dinko University of Split Croatia Wang, Chao Tianjin University China Wegner, Tadeusz Poznan University of Technology Poland Wei, Xunkai Beijing Aeronautical Technology Research Center China Wen, Fuhliang St. John's University Taiwan Wen, Guihua South China University of Technology China Wolfengagen, Viacheslav Education and Consulting Center JurInfoR Russian Federation Woo, Dae-Gon Yonsei University South Korea Rui, Xiaoguang University of Science and Technology of China China Yang, Lei Tongji University China Yindi, Zhao China University of Mining and Technology China Younis, Adel University of Victoria Canada Yusof, Yusri University Tun Hussein Onn Malaysia Malaysia Zacharewicz, Greg University of Bordeaux 1 France Poznan University of Economics Poland Zamarreño, CR Public University of Navarra Spain Zampieri, Douglas E. UNICAMP Brazil Zeiner, Herwig Joanneum Research Austria Zeman, Klaus Johannes Kepler University Linz Austria Zeng, Zhigang Wuhan University of Technology China

Abo El Magd, Mohamed Saud University Saudi Arabia Abramchuk, George Heorhi IEEE Canada Ahuja, IP Singh Punjabi University India Albayrak, Y. Esra Galatasaray University Turkey Al-Fuqaha, Ala Western Michigan University USA Ali, Adel University of Minnesota USA Al-Marzouqi, Mohamed UAE University UAE Alonso Lopez, Javier Polytechnic University of Catalonia Spain Altan, Metin Anadolu University Turkey Altemose, Brent Saber Safety USA Al-Zuhair, Sulaiman UAE University UAE Annunziata, Francesco University of Cagliari Italy Arunkumar, Thangavelu Vellore Institute of Technology India B., Ravishankar BMSCE India Badran, Omar Applied University Jordan Baez, Elsa Metropolitan Autonomous University of Mexico Bagherzadeh, Nader University of California USA Barreiro, J. University of Leon Spain Basson, Henri Littoral Computer Laboratory France Baugh, Joseph University of Phoenix USA Bellamy, Al Eastern Michigan University USA Bochniarz, Zbigniew University of Minnesota USA Boniolo, Giovanni University of Moncton Canada Brake, Mary Eastern Michigan University USA Breen-Smyth, Marie Aberystwyth University UK Bubnov, Alexey Institute of Physics Czech Republic Bugarin, Eusebio Ensenada Institute of Technology Mexico Zhang, C. The University of Western Ontario Canada Casolo, Federico Politecnico di Milano Italy Castier, Marcelo UAE University UAE Cavallucci, Denis INSA France Chang, Shun-Chang Dayeh University Taiwan Chate, Andris Riga Technical University Latvia Ciurana, Quim University of Girona Spain Cohen, Jerry Burns and Levinson USA Coni, Mauro University of Cagliari Italy

Costea, Carmen Academy of Economic Studies Bucharest Romania Dai, Fengzhi Tianjin University of Science and Technology China Dalto, Edson Jose Ibmec Business School Brazil Del Pozo, Dionisio Robotiker Spain Dereli, Türkay Vijay Alexandria University Egypt Dreschhoff, Gisela University of Kansas USA El Abed, Haikal Technical University Braunschweig Germany El-Naas, Muftah UAE University UAE Ersoy, Sezgin Marmara University Turkey Fales, Roger University of Missouri USA Feng, Zhihua Soochow University China AF Fibers, Petronio Federal University of Paraiba Brazil Fitzgerald, Sue Metropolitan State University USA Foley, Michael Aberystwyth University United Kingdom Fray, Derek University of Cambridge United Kingdom Fuerte-Esquivel, Claudio Michoacana University of San Nicolás de Hidalgo Mexico Fusco, José Paulo Alves Paulista University Brazil Galetto, Maurizio Polytechnic University of Torino Italy Gallucci, Fausto University of Twente The Netherlands Gandhi, Meenakshi Guru Gobind Singh Indraprastha University India Gao, Ghasem, Nayef UAE University UAE Gibson, David The Association of Building Engineers United Kingdom Gichev, Dinko Burgas Free University Bulgaria Girot, Franck ENSAM Bordeaux Spain Godart, Claude ESSTIN France Goel, Arun National Institute of Technology India Gomez, Jorge Marx University of Oldenburg Germany Grewal, Mohinder S. California State University USA Guo, Shu-Mei National Cheng Kung University Taiwan Hamidian, Karim California State University USA Han, Qingkai Northeastern University China Harris, Thomas Vanderbilt University USA Hasan, Mohamad K. Kuwait University Kuwait Hashemi, Hassan California State University USA Hasselbring, Wilhelm Christian-Albrecht University Germany Hemachandran, K. Assam University India Hernández, Roberto Universidad Autónoma Metropolitana Mexico Herrera, Enrique Jaime CIATEJ Mexico Hessel, Fabiano PUC-RS Brazil Holzer, Thomas National Geospatial-Intelligence Agency USA Hung, Yi-Hsuan Normal University Taipei Taiwan Jackson, Richard Reader in International Politics United Kingdom Jansons, Juris Institute of Polymer Mechanics Latvia

Jastroch, Norbert MET Communications Germany Jiang, Peixue Tsinghua University China João Mansur, Webe UFRJ Brazil Kadowaki, Makoto UNICAMP Brazil Kahraman, Cengiz Istanbul Technical University Turkey Kaiser, Stephan Ingolstadt School of Management Germany Kalita, UC RGI India Kalra, Prem Kumar Indian Institute of Technology Kanpur India Kamrani, Ehsan Ecole Polytechnique de Montreal Canada Kekäle, Tauno University of Vaasa Finland Kelly, Daniel The University of Adelaide Australia Kerns, Edward Lafayette College USA Khusainov, Denis National University of Kiev Ukraine Kim, Soon-Chul Electronics and Telecommunications Research Institute South Korea Kimbrell, Scott Georgia Institute of Technology USA Kokowski, Michal Polish Academy of Sciences Poland Kozak, Drazan Faculty of Mechanical Engineering Croatia Kumar, Rakesh S. V. National Institute of Technology India Kundu, Anirban Netaji Subhash Engineering College India La Prad, Jim Western Illinois University USA Lai, Weng Kin MIMOS Berhad Malaysia Lawal, Ganiyu Ishola University of Lagos Nigeria Lee, Jonathan National Central University Taiwan Lengerke Perez, Omar UFRJ Brazil Leon, John Institute of Fundamental Physics Spain Li, Shutao Hunan University China Li, Tsu-Shan National Cheng Kung University Taiwan Li, Xiaohua Florida International University USA Lin, Chih-Ting National Taiwan University Taiwan Lin, Chun-Cheng National Kaohsiung University of Applied Sciences Taiwan Liu, Bo Oak Ridge National Laboratory USA Liu, Junhua Xi'an Jiaotong University China Liu, Weidong Tsinghua University China Livingston, Debra University of the Sunshine Coast Australia Lukasz, Miroslaw Wroclaw University of Technology Poland Lukianowicz, Czeslaw Politechnika Koszalinska Poland Ma, Hui Northeastern University China Maginnis, Jean Maine Center for Creativity USA Mahmoudi, Saïd Mons Polytechnic Faculty Belgium Makwana, Ajay J. S. V. National Institute of Technology India Marcondes, Carlos Fluminense Federal University Brazil Marcos, Mariano Cadiz University Spain Marietto, Marcio Tatuí Faculty of Technology Brazil Marlowe, Thomas Seton Hall University USA Marsavina, Liviu Polytechnic University of Timisoara Romania Mathur, Mukesh National Institute of Urban Affairs India Matousek, Vaclav West Bohemian University Pilsen Czech Republic Maughmer, Mark Pennsylvania State University USA Medina, Dulce UAM Azcapotzalco Mexico Melo, Carlos State Audit Court of Minas Gerais Brazil Middleton, Howard Griffith University Australia

Morales, Luis Javier Universidad Veracruzana Mexico Moreno, Javier Centro de Investigación y Desarrollo de Tecnología Digital Mexico Motieifar, Alireza University of British Columbia Canada Nenovsky, Nikolay University of National and World Economy Bulgaria Nikl, Jiri University of Hradec Kralove Czech Republic Nisar, Humaira Gwangju Institute of Science and Technology South Korea Nolazco Flores, Juan A. ITESM Mexico Nonato S. S., Raimundo IBMEC Business School Brazil Ogata, Craig Argonne National Laboratory USA Ojolo, Sunday University of Lagos Nigeria Ostwald, Marian Poznan University of Technology Poland Otto, Tauno Tallinn University of Technology Estonia P., Balamuralidhar Tata Consultancy Services India Pal, Arpan Tata Consultancy Services Limited India Parekh, Ashok SV National Institute of Technology India Passoni, Lucia National University of Mar del Plata Argentina Patterson, Rob Simon Fraser University Canada Pelikan, Emil Academy of Sciences of the Czech Republic Czech Republic Pellom, Bryan University of Colorado USA Peon Escalante, Ignacio National Polytechnic Institute Mexico Pillai, Harish Indian Institute of Technology Bombay India Pinto Ferreira, Eduarda Porto Higher Institute of Engineering Portugal Prasad, Rajender Indian Institute of Technology Roorkee India Puslecki, Zdzislaw Adam Mickiewicz University Poland Qu, Fanqi Wuhan University China Qureshi, Suhail A. UET Oman Ragoub, Lakhdar Al Yamamah University Saudi Arabia Rahimi, Masoud Razi University Iran Ramos-Paz, Antonio Michoacana University of San Nicolas de Hidalgo Mexico Rawidean, Mohamed MIMOS Malaysia Ray, Daniel The University of Virginia's College USA Ray, Pradip Kumar IIT Kharagpur India Rain, Lixia Department of Chemistry and Biochemistry USA Rivera-Colon, Nile LiveTimeNet Inc. USA Robledo, Carlos Walter National University of Córdoba Argentina Rodrigues, José Fluminense Federal University Brazil Roman, Mihai Academy of Economic Studies Bucharest Romania Romanovsky, Boris MSU Russian Federation Sadoway, Donald Massachusetts Institute of Technology USA Saini, Jaswinder Singh Thapar University India Samyudia, Yudi Curtin University Malaysia Sanin, Cesar The University of Newcastle Australia Saqib, Asghar University of Engineering and Technology Pakistan Schumack, Mark University of Detroit Mercy USA Schürmann, Volker University of Applied Sciences Bochum Germany

See, Kye Yak Nanyang Technological University Singapore Seppänen, Marko Tampere University of Technology Finland Shannaq, Boumedyen Nizwa University Oman Shen, Yu-Lin University of New Mexico USA Shokry, Hanaa Higher Technological Institute Egypt Siddique, Mohammad Fayetteville State University USA Singh, Diwan NIT Kurukshetra India Singh, Rupinder GNDEC India Sitou, Wassiou Technische Universität München Germany Smith, Marilyn Georgia Institute of Technology USA Soares Filho, Djalma Petrobras Brazil Song, Fugen Donghua University China Su, J. L. Tongji University China Suell Dutra, Max UFRJ Brazil Suratgar, Amir Arak State University Iran Sutherland, Trudy Vaal University of Technology South Africa Sutikno, Tole Universitas Ahmad Dahlan Indonesia Swauger, Lane KCI Technologies USA Takala, Josu University of Vaasa Finland Tang, Yu University of Electronic Science and Technology China Tavakolian, Kouhyar Simon Fraser University Canada Taveira R., Railda S. State University of Paraiba Brazil Thomas, Valerie Georgia Institute of Technology USA Thompson, Andrew Synchrotron SOLEIL France Ting, Kwun-Lon Tennessee Tech University USA Tomczyk, Andrzej Rzeszow University of Technology Poland Hirayama, Tomoko Doshisha University Japan Tosic, Bratislav UNS Russian Federation Tu, Kuo-Yang NKFUST Taiwan Turut, Abdulmecit Ataturk University Turkey Valerio, Angel ITESM Mexico Verma, Nischal K. Indian Institute of Technology India Vite, Manuel National Polytechnic Institute Mexico Von Eye, Alexander Michigan State University USA Vybiral, Bohumil University of Hradec Kralove Czech Republic Wang, Chao Tianjin University China Wang, Chuanyang Soochow University China Wei, Li Chongqing Jiaotong University China Wen, Fuhliang St. John's University Taiwan Weng, George J. Rutgers University USA Wolfengagen, Viacheslav Education and Consulting Center JurInfoR Russian Federation Wan, Xiaoqiang Chongqing Jiaotong University China Xin, Ying Innopower Superconductor Cable China Xu, Baoming Beijing Jiao Tong University China Xue, Dongfeng Dalian University China Yin, Runyuan Shanghai Ocean University China Yu, Donggang The University of Newcastle Australia Zainal Abidin, Azizan Universiti Teknologi Petronas Malaysia Zakaria, Maamar Zayed University UAE Zalewski, Romuald Poznan University of Economics Poland Zambon, Renato USP Brazil

Zhang, Bailing Xi'an Jiaotong-Liverpool University China Zhang, Guibing China University of Geosciences China Zhou, Bing University of Adelaide Australia Zhu, Qingxin UESTC China Zogahib, André Luiz N. State University of Amazonas Brazil Zuhair, Sulaiman UAEU UAE

HONORARY CHAIRMAN
William Lesso

CHAIRS OF THE PROGRAM COMMITTEE
Hsing-Wei Chu
C. Dale Zinn

GENERAL PRESIDENT
Nagib Callaos

CHAIRS OF THE ORGANIZING COMMITTEE
Belkis Sánchez
Andrés Tremante

CONFERENCE PROGRAM MANAGER / PROCEEDINGS PRODUCTION CHAIR
Maria Sánchez

COMPUTER SYSTEMS TECHNICAL CONSULTANT / CD PRODUCTION CHAIR
Juan Manuel Pineda

PRESENTATION QUALITY CONTROL SUPPORT
Leonardo Contreras

META-REVIEWER SUPPORT
Maria Sánchez
Dalia Sánchez

DEVELOPMENT, MAINTENANCE AND DEPLOYMENT OF SYSTEMS
Dalia Sánchez
Keyla Guédez
Nidimar Diaz
Yosmelin Marquez

OPERATIONAL ASSISTANTS
Marcela Briceño
Cindi Padilla

HELP DESK
Riad Callaos
Louis Barnes
Katerim Cardona
Arlein Viloria
Pedro Martínez


GENERAL PRESIDENTS
Andrés Tremante
Nagib Callaos (IMETI)

PRESIDENTS OF THE ORGANIZING COMMITTEE
Jorge Baralt
Friedrich Welsch
Belkis Sánchez

PROGRAM COMMITTEE
Presidents: Andres Tremante (Venezuela), Nagib Callaos (Venezuela)

Akinnikawe, Oyewande Texas A&M University USA Anchliya, Abhishek Texas A&M University USA Arena, Umberto Second University of Naples Italy Baruah, Debendra Chandra Tezpur University India Basso, Giuliano IEEE Belgium Brezet, JC Delft University of Technology Netherlands Carrasquero, Jose Vicente Simon Bolivar University Venezuela Cazares-Rangel, Victor M. UANL Mexico Cerny, Vaclav University of West Bohemia Czech Republic Cha, Seung Tae Korea Electric Power Research Institute South Korea Ehlig-Economides, Christine Texas A&M University USA Elkamel, Ali University of Waterloo Canada Fukase, Masa-Aki Hirosaki University Japan Gustavsson, Rune Blekinge Institute of Technology Sweden Hansen, Martin Otto L. Technical University of Denmark Denmark Kan, S. Y. Delft University of Technology The Netherlands Kim, Yong Hak Korea Electric Power Research Institute South Korea Klapp, Jaime National Institute for Nuclear Research Mexico Lefevre, Thierry Center for the Development of Energy Environmental Resources Thailand Mastellone, Maria Laura Second University of Naples Italy Melioli, Alessandro University of Salerno Italy Platt, Glenn CSIRO Energy Technology Australia Rahman, Anuar Abdul Pusat Tenaga Malaysia Malaysia Revetria, Roberto University of Genoa Italy Riaz Moghal, Mohammad University College of Engineering and Technology Pakistan Shin, Jeong Hoon Korea Electric Power Research Institute South Korea Tam, Wing K. Swinburne University of Technology Australia Velte, Clara Marika Technical University of Denmark Denmark Zaccariello, Lucio Second University of Naples Italy Zobaa, Ahmed Cairo University United Kingdom

ADDITIONAL REVIEWERS Al-Ammar, Essam King Saud University Saudi Arabia Barzev, Kiril University of Rousse Bulgaria Bode, Sven Arrhenius Institute for Energy and Climate Policy Germany Bruna, Elena Yale University USA De Benedictis, Michele Polytechnic of Bari Italy Duta, Anca Transylvania University of Brasov Romania Ferrando, Emanuele Selex Galileo Italy Hammad, Mahmoud GNREADER Jordan Herreros, Jose Martin University of Castilla-La Mancha Spain Hiser, Eric Jorden Bischoff USA Jayaraj, Simon NIT Calicut India Khatib, Hisham World Energy Council Jordan Kulp, W. David Georgia Institute of Technology USA Lu, Shuai Pacific Northwest National Laboratory USA Ma, Jian Pacific Northwest National Laboratory USA Mansour, Mohy Cairo University Egypt Owen, Alan Robert Gordon University United Kingdom Pimentel, David Cornell University USA Popescu, Mihai O. University Politehnica of Bucharest Romania Reich, Nils Utrecht University Netherlands Sampara, Chaitanya Nanostellar Inc. USA Schenk, Peer The University of Queensland Australia Shi, Yu University of Wisconsin-Madison USA Singh, Kaushlendra West Virginia University USA Sparber, Wolfram Eurac Research Italy Van Dyk, Ernest Nelson Mandela Metropolitan University South Africa Vijay, Virendra Kumar Indian Institute of Technology Delhi India Ward Jr., Marvin Center for Public Finance Research USA Yang, Qiujing The Pennsylvania State University USA Zhao, Jinquan Hohai University China Borges, Amadeu University of Tras-os-Montes and Alto Douro Portugal Burtraw, Dallas Resources for the Future USA Carvajal Marshal, Ignacio National Polytechnic Institute Mexico Casini, Dante University of Pisa Italy Clarke, Joe University of Strathclyde UK Denholm, Paul US National Renewable Energy Laboratory USA Elia, Nicola Iowa State University USA Ferrando, Emanuele Selex Galileo Italy Graditi, Giorgio ENEA Italy

Grimoni, Jose USP Brazil Han, Aijie University of Texas USA Hanks, Dallas Utah State University USA Kalantar, Mohsen Iran University of Science and Technology Iran Khatib, Hisham World Energy Council Jordan Kristolaitis, Ricardas Lithuanian Energy Institute Lithuania Kukushkin, Nikolai Russian Academy of Sciences Russian Federation Lin, Feng Wayne State University USA Liu, Yongwen Shanghai Jiao Tong University China McCalley, James Iowa State University USA McCann, Richard Aspen Environmental Group USA Mesarovic, Miodrag Energy Project Consulting Engineers Co. Serbia Mishra, Sukumar Indian Institute of Technology Delhi India Neyestani, Nilufar Iran University of Science and Technology Iran Polupan, Giorgiy National Polytechnic Institute Mexico Potter, Cameron IEEE Australia Rizzo, Gianfranco University of Salerno Italy Sharma, MP Indian Institute of Technology India Shayanfar, Heidarali Iran University of Science and Technology Iran Shayeghi, Hossein Iran University of Science and Technology Iran Starr, Andrew University of Hertfordshire UK Van Dyk, Ernest Nelson Mandela Metropolitan University South Africa Wang, Fei Zhejiang University China Zheng, Jie University of Akron USA Zobaa, Ahmed Cairo University UK


Number of Papers Included in these Proceedings by Country
(The country of the first author was the one taken into account for these statistics)

Country / # Papers / %
TOTAL
United States
Japan
Brazil
Italy
Mexico
China
Australia
Canada
Germany
India
Hong Kong
Spain
Argentina
Israel
Taiwan
United Kingdom
Austria
Colombia
Czech Republic
Egypt
Estonia
France
Ireland
Latvia
Lithuania
Malaysia
Netherlands
New Zealand
Poland
Romania
Serbia
Singapore
Slovakia
South Africa
South Korea
Ukraine
United Arab Emirates


Foreword

Engineering activities are based on the development of new knowledge (Scientia), new 'things done' (Techné) and/or new ways of working and doing (Praxis). Scientia, Techné and Praxis are three important dimensions of an integral conception of Engineering as a whole. Engineering as Scientia is developed mainly in the academic field; as Techné, it is practiced in industry, generating technological innovations; and as Praxis, it is carried out in technical and non-technical organizations, supporting managerial activities and technical procedures through methodical and methodological design and implementation. That is why engineering provides one of the strongest academic and professional foundations for building bridges between universities, industry, and government.

Publications and conferences related to Engineering are usually oriented to one of its three dimensions. While this is appropriate when seeking a disciplinary approach, it does not represent engineering as a whole and overlooks the very important synergistic relationships between the three types of engineering activities mentioned above. This is why a group of academics, professionals and consultants in the field of engineering considered the possibility of organizing a conference where the presentations would not be reduced to a specific dimension of Engineering, but would encourage the participation of academics, professionals and managers in the three dimensions of Engineering, in the same conference, so that they could interact synergistically with each other. A consequence of this purpose is the organization of IMETI 2010, where proposals were accepted for the presentation of: new knowledge (Engineering as Scientia); new products and services, that is, technological innovations (Engineering as Techné); new technical and management methods and methodologies (Engineering as Praxis); and new knowledge, innovations and meta-engineering methodologies (Engineering of engineering activities).

The 7th International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA 2010) and the 8th International Conference on Computing, Communications and Control Technologies (CCCT 2010) have been organized in the context of IMETI 2010, because both are oriented mainly to Engineering and Technology. Both are international multi-conferences organized with the purpose of providing a communication forum for researchers, engineers, professionals, developers, consultants and end users of computerized, communication and/or control systems and technologies in the private and public sectors. This multidisciplinary forum provides an opportunity to share experiences and knowledge by facilitating discussions on current and future research and innovation. Participants can explore the implications of the relationships between new developments and their applications for organizations and society in general. One of the main objectives of CITSA 2010, CCCT 2010 and, in general, IMETI 2010 is to promote and encourage interdisciplinary knowledge exchange and communication.

They encourage systems thinking and practice, including the analogical thinking that characterizes the Systems Approach, which is, in most cases, the path required for logical thinking, scientific hypothesis formulation, and new design and innovation in engineering.

CITSA 2010 and CCCT 2010 are derivatives of the International Conference on Information Systems, Analysis and Synthesis (ISAS) and the World Multi-Conference on Systemics, Cybernetics and Informatics (WMSCI), annual events that have been held over the last 15 years as a forum in which Information Systems researchers, professionals, consultants and users have been exchanging ideas, research results and innovations in the area of Information Systems. Both analytical and synthetic thinking represent the conceptual and methodological infrastructures that underpin the papers presented at the ISAS conferences. Synthetic thinking supported work in the area of Information Systems, as well as in its relationships (analogies, "epistemic things", "synthetic technical objects", hybrid systems, cross-fertilization, etc.) with other areas.

The IMETI/CITSA/CCCT 2010 Organizing Committees invited authors to submit original papers, hypotheses based on analogies, innovations, reflections and concepts based on experiences, specific problems requiring solutions, case studies and position papers exploring the relationships between the disciplines of computing, communications, and control, and the social and industrial applications within these fields.

On behalf of the Organizing Committee, I extend our sincere thanks to: 1. the 625 members of the Program Committee from 63 countries; 2. the additional 673 reviewers, from 80 countries, for their double-blind peer reviews; and 3. the 451 reviewers, from 58 countries, for their efforts in conducting the non-blind peer reviews. (Some reviewers supported both non-blind and double-blind reviews for different submissions.)

A total of 2480 reviews by 1124 reviewers (who did at least one review) contributed to the quality achieved in IMETI 2010. This means an average of 5.84 reviews per submission (425 submissions were received). Each registered author could obtain information on: 1) the average of the reviewers' evaluations according to 8 criteria, and the average of an overall evaluation of their submission; and 2) the comments and constructive feedback from the reviewers who recommended the acceptance of their submission, so that the author could improve the final version of the paper.

In the organizational process of IMETI 2010, including CITSA 2010 and CCCT 2010, around 425 papers/abstracts were submitted. These pre-conference proceedings include some 126 papers, from 36 countries, that were accepted for presentation. We extend our thanks to the organizers of the invited sessions for collecting, reviewing and selecting the papers to be presented in their respective sessions. Submissions were reviewed as carefully as time allowed; it is expected that most of them will appear in a more polished and complete form in scientific journals.
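As a quick arithmetic check (a reconstruction from the totals quoted above, not a figure stated separately in the original foreword), the quoted average follows directly from the review and submission counts:

\[ \frac{2480\ \text{reviews}}{425\ \text{submissions}} \approx 5.84\ \text{reviews per submission} \]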

This information about IMETI 2010 is summarized in the following table, along with the other co-located conferences:

Conference / # of submissions received / # of reviewers who performed at least one review / # of reviews performed / Average reviews per reviewer / Average reviews per submission / # of papers included in proceedings / % of submissions included in proceedings
WMSCI %
IMETI %
IMSCI %
CISCI %
TOTAL %

We are also grateful to the co-editors of these proceedings for the hard work, energy and enthusiasm they demonstrated when preparing their respective sessions. We express our deep gratitude to Professor William Lesso for his wise and timely mentorship, for his undying energy, integrity, and continued support and advice as Honorary President of WMSCI 2010 and its joint conferences, as well as for being a loving old friend and intellectual father of many of us. We also extend our gratitude to Professor Belkis Sánchez, who brilliantly managed the organization process.

Special thanks to Dr. C. Dale Zinn for chairing the CCCT 2010 Program Committee (PC) and for co-chairing the IMETI 2010 PC; to Professor Hsing-Wei Chu for co-chairing the IMETI 2010 PC and serving as CCCT 2010 General Co-Chair; to Professor Michael Savoie for being the General Co-Chair of CCCT 2010 and CITSA 2010; to Professor José Ferrer for chairing the CITSA 2010 Organizing Committee; and to Professors Andrés Tremante and Belkis Sánchez for co-chairing the organizing committee of IMETI 2010. We also extend our thanks to Drs. W. Curtiss Priest, Louis H. Kauffman, Leonid Perlovsky, Stuart A. Umpleby, Eric Dent, Thomas Marlowe, Ranulph Glanville, Karl H. Müller, and Shigehiro Hashimoto for agreeing to address the audience of the Joint General Plenary Sessions with the main keynote addresses, as well as Drs. Sam Chung, Susu Nousala, and Robert Lingard for accepting our invitation as Keynote Speakers at the IMETI Plenary Session.

Many thanks to Professors Friedrich Welsch, Thierry Lefevre, José Vicente Carrasquero, Angel Oropeza and Freddy Malpica for chairing and supporting the organization of focus symposia and conferences in the context of IMETI or at the same venue. We also wish to thank all the authors for the quality of their articles. We also extend our thanks to María Sánchez, Juan Manuel Pineda, Leonisol Callaos, Dalia Sánchez, Keyla Guedez, Nidimar Díaz, Yosmelin Márquez, Riad Callaos, Marcela Briceño, Pedro Martínez, Louis Barnes and Katerim Cardona for their committed effort to support the organizational process and to produce the printed and CD versions of the proceedings.

Professor Nagib C. Callaos
General President of IMETI 2010


IMETI 2010
3rd International Multiconference on Engineering and Technological Innovation

2nd International Symposium on Engineering, Economics and Energy Policy: EEEP 2010

VOLUME I

CONTENTS

Applied Sciences, including Applications of Mathematics, Physics, Chemistry, Biosciences and

Bervalds, Edgars; Dobelis, Modris (Latvia): ''Pentahedral honeycomb with oblique hexagonal faces'' 1
Braiman, Avital; Rudakov, Fyodor; Thundat, Thomas (USA): ''Separation of DNA by directed optical transport'' 5
Zamin, Norshuhani; Ghani, Arina (Malaysia): ''A Hybrid Approach for Malay Text Summarizer'' 6

Computer engineering, including software engineering, requirements engineering, and information systems and information technology engineering

Bhowmik, P.*; Das, S.*; Nandi, D.*; Chakraborty, A.*; Konar, A.*; Nagar, A. K.** (* India, ** United Kingdom): ''Electroencephalographic signal-based clustering of emotions stimulated using the Duffing oscillator'' 12
Fox, Jorge; Clarke, Siobhán (Ireland): ''A Survey of Dynamic Adaptation Techniques''
Ribeiro, Claudio Jose Silva (Brazil): ''Architecting Information Resources of Brazilian Social Security: Approaches of Social Science and Computer Science Working Together''
Tousignant, Michel; Hamel, Mathieu; Brière, Simon (Canada): ''Home telerehabilitation as an alternative to face-to-face treatment: feasibility in post-arthroplasty of the knee, speech therapy and chronic obstructive pulmonary disease'' 30
Wang, Yan; Le, Jiajin; Huang, Dongmei (China): ''Research on Fuzzy Comprehensive Evaluation of Performance Analysis in Data Warehouse Engineering Model Design'' 35

Disciplinary Research and Development

Azzam, Adel A.; Al-Marzouqi, Ali H.; Zekri, Abdulrazag Y. (United Arab Emirates): ''Remediation of crude oil contaminated soils using supercritical CO2'' 40

Dickinson, Sarah; Watson, Paul; Franks, Ann; Workman, Garry (UK): ''An innovative approach to interdisciplinary teaching for students in built environments'' 46
Martínez-Ramírez, A.; Lecumberri, P.; Gomez, M.; Izquierdo, M. (Spain): ''Triaxial inertial magnetic tracking in rest analysis using wavelet transform'' 51
Mukherjee, Debnath; Shakya, Deepti; Misra, Prateep (India): ''Complex event processing in power distribution systems: a case study'' 55
Oren, Rodger (USA): ''A conceptual framework for successful project engineering'' 61
Yadav, S.M.; Samtani, B.K.; Chauhan, K. A. (India): ''Mathematical relationship between Reynolds particle number and ripple factor using data from the Tapi River, India'' 67
Zaidi, Syed Ammar; Marzencki, Marcin; Meno, Carlo; Kaminska, Bozena (Canada): ''Non-invasive method for pre-hospitalization treatment of patients with myocardial infarction'' 72

Concepts, relationships and engineering methodologies

Aminaee, Sara *; Raziei, Seyed Ataollah ** (* Canada, ** Iran): ''Value Engineering for Miane 400/230/63 KV Transformer Station to improve quality, optimize cost and project start-up period''
Aminaee, Sara *; Raziei, Seyed Ataollah ** (* Canada, ** Iran): ''Evaluation of alternatives and choice of the optimized solution in a 400KV Tabas-Bafgh transmission line project as a good experience to eliminate unnecessary costs''
Baltazar, Clayton; Ribeiro, Tathiane; Machado Caldeira, André; Soares Machado, Maria Augusta; Lermontov, Mikhail; Martins França, Romulo; Drummond, Thiago; Gassenferth, Walter (Brazil): ''Using Data Mining to Optimize Business and Operational Interests: A Duke Energy Brazil Case Study''
Chauhan, Krupesh A.; Shah, N. C.; Yadav, S. M. (India): ''Affordability and level of care for the weaker economic section group: a case study from the city of Surat (India)'' 97
Giberti, Hermes; Cinquemani, Simone; Legnani, Giovanni (Italy): ''A generalized definition of Jacobian matrix for mechatronic systems'' 102
Giberti, Hermes; Cinquemani, Simone (Italy): ''Classification of servomotors based on the acceleration factor'' 107
Hidalgo, Ieda G. *; Soares F., Secundino *; Fontane, Darrell G. **; Cicogna, Marcelo A. *; Lopes, João E. G. * (* Brazil, ** USA): ''Impact of hydroelectric plant data quality on the analysis of past operations using a medium-term simulation tool'' 113
Kim, Dean H.; Reyer, Julie A. (USA): ''Design of a palpation simulator using magnetorheological fluids'' 118

Lobato Calleros, Odette; Rivera, Humberto; Serrato, Hugo; Gold, Federico; Leon, Christian; Gomez, Ma. Elena; Cervantes, Paula; Acevedo, Adriana; Méndez Ramírez, Ignacio (Mexico): ''Design and Implementation of a Methodology for the Establishment of the Mexican Customer Satisfaction Index for Social Programs: The Case of the Subsidized Milk Program'' 123
Hammad, Mahmoud Tarek M. (Egypt): ''The challenge between traditional and environmental aspects facing modern architectural design, a case study'' 129
Mubin, Ashirul; Luo, Zuqun (USA): ''Building a reconfigurable system with an integrated metamodel'' 136
O'Steen, Billy; Brogt, Erik; Chen, XiaoQi; Chase, J. Geoff (New Zealand): ''Use of system sensing during the implementation of a new mechatronics engineering curriculum'' 140
Schwartz, Gilson; Spina, Edison; de Almeida Amazonas, José Roberto (Brazil): ''Future Internet Challenges not related to engineering'' 146
Troudt, Edgar E.; Winkler, Christoph; Audant, A. Babette; Schulman, Stuart (USA): ''The Virtual Engineering Firm: A Framework for Entrepreneurial and Soft Skills Education'' 152
Vint, Larry Amo (Australia): ''Refocusing Engineering Design for an Entrepreneurial Environment sustainable living'' 158
Vornholt, Stephan; Köppen, Veit (Germany): ''Integrated and Data Driven Engineering for Virtual Prototypes'' 164

Engineering Training

Anzúrez Marín, Juan; Torres Salomao, Luis A.; Lázaro, Isidro I. (Mexico): ''Fault detection and isolation for a bus suspension model using an unknown input observer design'' 170
Bhatnagar, Kaninika (USA): ''Engineering technology and Gender: Improving Voice and Access for Minority Groups by Curriculum Design for Distance Education'' 176
Rabe, Vlasta; Hubalovsky, Stepan (Czech Republic): ''New concepts in engineering education through e-learning'' 181
Tshitshonu, Eudes K. (South Africa): ''An online support approach for effective teaching and learning, a case study'' 186
Wellons, Jonathan; Johnson, Julie (USA): ''Planning and the novice programmer: how grounded theory research can lead to better interventions'' 189
Xie, Yimin; Wong, David; Kong, Yinan (Australia): ''Hardware resources in teaching digital systems'' 195

Engineering technology transfer

Chowdhary, Girish; Komerath, Narayanan (USA): ''Innovations Needed for Short Range Retail Beam Power Transmission'' 200

Garcia-Martinez, Sigridt; Espinosa-Juárez, Elisa (Mexico): ''Analysis of the influence of distributed generation on voltage dips in electrical networks'' 206
Musella, Juan Pablo; Janezic, Gustavo; Branca, Diego; Lopez de Luise, Daniela; Milne, James Stuart; Ricchini, German; Milan, Francis; Bosio, Santiago (Argentina): ''Use of DSS in an Industrial Context'' 212

Materials Science and Engineering

Curatolo, S. (USA): ''Optically Tuning TC in Any Superconductor''
Hilerio C., I.; Barron M., M.A.; Hernández L., R.T.; Altamirano T., A. (Mexico): ''Wet and dry abrasion behavior of AISI 8620 Boriding steel'' 224
Wang, Chihong; Guo, Xuhong; Wang, Wei; Dong, Qu (China): ''Study on the influence of cutting parameters on cutting forces and chip shape of austempered ductile iron (ADI)'' 228

Mechanical engineering, including industrial engineering, operations research, Aerospace, Marine and Agricultural Engineering, Mechatronics, Robotics

Li, June; Zhang, Shiyi; Yang, Lizhong (China): ''Analysis and Research of the Influence of Advanced Firing Angle on Engine Emissions as a Function of Fuel Quality''
Li, Ming *; Singh, Gurjiwan *; Singh, Gurjashan *; Garcia, Alberto *; Tansel, Ibrahim *; Demetgul, Mustafa **; Yenilmez, Aylin ** (* USA, ** Turkey): ''Swept Sine Wave Based SHM for Short Composite Tubes'' 237
Ngabonziza, Yves; Li, Jackie (USA): ''Electromechanical behavior of CNT nanocomposites''
Rakin, Marko *; Gubeljak, Nenad **; Medjo, Bojan *; Maneski, Tasko *; Sedmak, Aleksandar * (* Serbia, ** Slovenia): ''Application of structural integrity assessment software'' 246
Wen, Fuhliang; Lin, Jhenyuan; Wen, Hungjiun; Chang, Kuo-Hwa (Taiwan): ''Pulsed atmospheric pressure plasma system applied to PCB surface treatment'' 250

Role of technological innovation in economic development

Arroyo, Pilar; Erosa, Victoria (Mexico): ''Understanding the disadvantages of technological support to promote the use of IT among small Mexican companies'' 255
Stancu, Stelian; Predescu, Oana Mãdãlina (Romania): ''Portfolio selection in the Romanian capital market in the era of e-commerce'' 260
Zalewski, Romuald I.; Skawinska, Eulalia (Poland): ''How are innovative activity and competitiveness related to the economic growth of nations?'' 266

Technological collaboration

Castro, Sebastião; Grandioso, Armando (Brazil): ''Information Technologies and their Use in TCE-MG''

Ghazanshahi, Shahin (USA): ''System Identification Techniques for Array Flying Telescope Arrays'' 277
Omel'Chuk, Anatolii A.*; Yudenkova, Inna N.*; Qiang, Yang Li**; Wen, Huang Jian**; Maslo, Nikolay A.* (* Ukraine, ** China): ''Electrochemical Decontamination'' 281

Innovation and Technological Development
Forbes, Alex; Patel, Anant; Cone, Chris; Valdez, Pierre; Komerath, Narayanan (USA): ''A New Look at Hydrogen-Powered Supersonic Aircraft'' 287
Franzellin, Vittorio M.; Matt, Dominik T.; Rauch, Erwin (Italy): ''The Value of the (Future) Customer in Focus: An Axiomatic Design Method Combined with a Delphi Approach to Improve the Success Rate of New Strategies, Products or Services''
Fung, Pik-Chi James*; Lam, Wai-Pui Victor*; Ma, Li Patrick*; Chan, Chuen-Yu John** (* Hong Kong, ** China): ''Performance Study on Nitrogen and Organic Leachate Removal Using a Two-Stage Oxic-Anoxic Biological Aerated Filter (OABAF) System''
Wu, Chihming; Yu, Wen-Der; Cheng, Shao-Tsai (Taiwan): ''Preliminary Study on the Model for the Automatic Generation of Innovative Alternatives'' 307

Energy and Information Technologies
Micheletti, Roberto (Italy): ''Digital Filtering for Distance Protection of Power Lines through Walsh Functions'' 313

Power Engineering
Gan, Yong X. (USA): ''Nanoporous Nickel for Electrochemical Energy Conversion''
Sues, Anna; Veringa, Hubert J. (The Netherlands): ''Selection of the Best Biomass to Bioenergy Conversion Route for Implementation in the European Energy Sector: An Integrated Analysis of Efficiency, Economics and the Environment''

Power Systems/Technologies
Ding, Jinxu; Somani, Arun (USA): ''Analysis of the Impacts of Transmission Line Capacity Expansion on the Development of Clean Energy Systems to Implement Clean Energy Policies'' 330
Klementavicius, Arturas; Radziukynas, Virginijus; Radziukyniene, Neringa (Lithuania): ''Assessing Risks of Cross-Border Propagation of Imbalances in Relation to Reduced Generation Reserves'' 336

Sociopolitical, Economic and Environmental Contexts of Energy Systems and Technologies
Behr, Joshua G.; Díaz, Rafael (USA): ''Modeling and Simulation of Public Health Based on the Evolution of the Electric Energy Portfolio of Southeastern Virginia'' 342

Fagiani, Ricardo*; Marano, Vincenzo**; Sioshansi, Ramteen** (* Italy, ** USA): ''Cost and Emissions Impacts of Plug-in Hybrid Electric Vehicles on the Ohio Power Grid'' 350
Townley, Christopher; Howe, Joe (UK): ''Strategic Planning for Energy: Too Little, Too Late?''
Author Index

Pentahedral Honeycomb with Skewed Hexagonal Faces

Edgars BERVALDS, Latvian Academy of Sciences, Akademijas laukums 1-205, Riga, LV-1050, Latvia
Modris DOBELIS, Department of Computer Aided Engineering Graphics, Riga Technical University, Azenes iela 16/20-439, Riga, LV-1048, Latvia

ABSTRACT
This paper proposes a fundamentally new type of macro- and mega-space-filling honeycomb whose quasi-regular pentahedral cells have skew hexagonal faces. The existence of the spatial cells, called pentahedra, is demonstrated by topological transformations of the hexagonal prismatic honeycomb and rests on a recently discovered Phi relationship within the regular hexagonal tiling. The geometric symmetry of this honeycomb, which fills space without gaps or overlaps, is studied. Finally, it is pointed out that abstract or skeletal analogues of the pentahedral honeycomb have effective practical uses in the synthesis of artificial man-made macromedia, especially large-scale orbital structural systems.
Keywords: Topological Transformations, Skew Faces, Pentahedral Honeycomb

1. INTRODUCTION
Two earlier discoveries underlie the need for these investigations in macrostructural spatial geometry. The first belongs to structural mechanics: in 1993 it was shown that the topological stiffness components of spatial bar systems are not identical for the same volume of material used [2]. The numerical value of this characteristic is highest for a cell such as the diamond lattice, exceeding, for example, by a factor of 5.9 the topological stiffness component of the traditionally used triangular lattice. The second: in 2007 it was discovered that the vertices of regular hexagonal tilings are Phi centers with a very low variance, corresponding to the 11th term of the Fibonacci sequence converging on Phi [3, 4]. The existence of such a rational geometric relationship suggests that precisely the hexagonal structure could ensure the highest specific mechanical rigidity. This poses the geometric problem of constructing a honeycomb as a spatial analogue of the flat regular hexagonal tiling.

2. TOPOLOGICAL TRANSFORMATIONS OF A HEXAGONAL PRISMATIC HONEYCOMB
We begin by building an infinite horizontal layer of regular hexagonal prisms (Fig. 1). Let a be the length of the edges of their base hexagons, and let y be the height of each prism, that is, the distance between the horizontal base planes (to be determined below). [Figure 1: A layer of hexagonal prisms: (0) and (-1) are the horizontal planes of even and odd vertices, respectively; a is the edge length of the hexagons; y is the thickness of a layer.] We then form a hexagonal prismatic honeycomb by placing such layers on top of each other so that the bases of the prisms in neighboring layers coincide. The horizontal planes between layers are labeled with consecutive integers: ..., -4, -3, -2, -1, 0, 1, 2, 3, 4, ... We also color the vertices of the grid black and white so that the ends of each horizontal edge have different colors, while the ends of each vertical edge have the same color. This coloring is used to describe a certain deformation of the honeycomb: (a) we move all black points lying in the planes labeled with even integers (including 0) vertically up by an amount x, and all white points down by the same amount x (x to be determined later); (b) we act in reverse (black vertices down, white up) in the odd planes (Fig. 2).

During the topological transformation, the vertical edges of the old hexagonal prismatic honeycomb are divided into two groups: one group consists of the longer edges (y + 2x) and the other of the shorter ones (y - 2x). Thus, instead of the 6 vertical rectangles of each former hexagonal prism we obtain 3 once-folded regular skew hexagonal faces, and instead of the 2 horizontal hexagons, 2 twice-folded regular skew faces. In this way we obtain a peculiar polyhedron of a new kind with five skew hexagonal faces, which we will call a pentahedron.

Now we determine the corresponding values of the parameters x and y. Consider a vertex I inside the obtained pentahedral honeycomb (Fig. 3), connected with the vertices J, N, H and C. In this figure, point I moved up and points J, N and H moved down from their common plane, while point C moved down from the nearest upper plane. Point I now has only four adjacent vertices (in contrast to five in the former hexagonal prismatic honeycomb). Since we want the four edges incident to I to be of equal length, and the six angles between those edges to be of equal size, the pyramid JNHC has to be regular with point I as its center. [Figure 3: The pentahedral honeycomb: regular tetrapod circumscribed by a triangular pyramid.] Draw the regular triangular pyramid JNHC with center I, mark the center of its base triangle with M and the midpoint of the edge JN with R (Fig. 3). Recall that a was the initial edge length of the hexagons, x the amount of vertical offset of the vertices, and y the thickness of a layer (i.e., the length of the vertical edges). Therefore JM = NM = HM = a, IM = 2x and MC = y. Since the pyramid is regular, the edges of the pentahedron incident at I have equal length, which we denote by z: JI = NI = HI = CI = z. The angles between these edges are also equal; let α be their size. Then ∠JIN = ∠NIH = ∠HIJ = ∠JIC = ∠NIC = ∠HIC = α. Carrying out the calculations in eqs. (1)-(8), we obtain the values of x, y and z with respect to a, as well as the measure of the angle α:

JN = NH = JH = JC = NC = HC = √3 a   (1)
y = MC = √(NC² - MN²) = √(3a² - a²) = √2 a   (2)
IM = (1/4) MC = (√2/4) a   (3)
(since the center of a regular pyramid divides its height in the ratio 1:3)
x = (1/2) IM = (√2/8) a   (4)
IR = √(IM² + MR²) = √(a²/8 + a²/4) = (√6/4) a, with MR = (1/2) MN = a/2   (5)
NR = (1/2) JN = (√3/2) a   (6)
α = 2 ∠RIN = 2 arctan(NR/IR) = 2 arctan √2 ≈ 109.47°   (7)
z = JI = √(JM² + IM²) = √(a² + a²/8) = (3√2/4) a   (8)

In summary: the pentahedral honeycomb is a structure determined by a single parameter z. To build this structure, a hexagonal prismatic honeycomb must first be built in which the edge length of each base hexagon is a = (2√2/3) z and the height of each prism (that is, the length of its lateral edges) is y = √2 a = (4/3) z. The vertices of the prisms then have to be displaced by x = (√2/8) a = z/6. In this way the faces of the hexagonal prismatic honeycomb are transformed into the skew hexagonal faces of the pentahedral honeycomb. All angles between adjacent edges in the pentahedral honeycomb equal α = 2 arctan √2 ≈ 109.47°.
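Since the equations above had to be recovered from a garbled source, a quick numeric check is useful. The following minimal sketch (our illustration, not part of the original paper) places the tetrapod of Fig. 3 in coordinates and verifies the reconstructed relations; all names and values in it are assumptions of the sketch.

import numpy as np

# Numeric check of the reconstructed relations (1)-(8): place the tetrapod
# of Fig. 3 in coordinates, with the base-triangle center M at the origin.
a = 1.0                                  # hexagon edge length
y = np.sqrt(2.0) * a                     # layer thickness, eq. (2)
x = np.sqrt(2.0) / 8.0 * a               # vertical vertex offset, eq. (4)
z = 3.0 * np.sqrt(2.0) / 4.0 * a         # pentahedron edge length, eq. (8)

# Base vertices J, N, H on a circle of radius a (JM = NM = HM = a),
# apex C at height MC = y, and the tetrapod center I at height IM = 2x.
J, N, H = (np.array([np.cos(t), np.sin(t), 0.0])
           for t in (0.0, 2 * np.pi / 3, 4 * np.pi / 3))
C = np.array([0.0, 0.0, y])
I = np.array([0.0, 0.0, 2 * x])

for P in (J, N, H, C):                   # all four edges at I have length z
    assert np.isclose(np.linalg.norm(P - I), z)

def angle_deg(u, v):
    """Angle between the edges from I to u and from I to v, in degrees."""
    u, v = u - I, v - I
    return np.degrees(np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v))))

print(angle_deg(J, N), angle_deg(J, C))  # both ~109.47 = 2*arctan(sqrt(2))

All four assertions pass and both printed angles come out at about 109.47 degrees, matching eq. (7).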
Let us examine one more geometric relationship within a pentahedral cell with skew hexagonal faces. In Fig. 2 a deformed side face ABHG of an initial hexagonal prism is shown. It is an isosceles trapezoid; the sides AG, GH and AB are edges of a cell of the newly constructed pentahedral honeycomb, ∠AGH = ∠GAB = α = 2 arctan √2, and AP is the height of the trapezoid. We should check that BP = 2x = z/3. Indeed,

BP = AB sin ∠PAB = AB sin(∠GAB - π/2) = z sin(2 arctan √2 - π/2) ≈ 0.333 z = z/3.

3. PENTAHEDRAL HONEYCOMB SYMMETRY
Symmetry, being the most inherent property of pentahedra and of spatial pentahedral tilings, has been studied both at the level of a separate cell and of the honeycomb as a whole by means of symmetry groups (Fig. 4). The pentahedron has 12 vertices, 15 edges, and 5 skew hexagonal faces ABCDEF, GHIJKL, ABCIHG, EDCIJK and AFEKLG. In a pentahedral honeycomb, each vertex is incident

to 4 equal edges, and the angle between any two of them is α = 2 arctan √2 ≈ 109.47°. [Figure 4: Symmetry of a pentahedral cell: fold lines of the skew hexagonal faces with midpoints X and Y; triangular prism; symmetry vectors.]

3.1 Polyhedral symmetry group
The symmetry group of a pentahedron, that is, the group of orthogonal transformations of space that map the figure onto itself, is isomorphic to the symmetry group D3 of a regular triangular prism. This follows from the fact that every orthogonal transformation that maps a pentahedron onto itself is completely determined by an orthogonal transformation that maps the regular prism ACEGIK onto itself. The generators of the symmetry group of a pentahedron are: a) rotation by 120° about the central vertical axis of the pentahedron; b) reflection in the central horizontal plane; c) reflection about the line XY.

3.2 Symmetry of the honeycomb
There are three independent vectors whose translations map the honeycomb onto itself. These translations are the basis elements of the translation group of the honeycomb, which is isomorphic to the free abelian group Z³ = Z × Z × Z. More specifically, the basis consists of: a) the vertical translation along the vector AG + FL = AG + (5/3) AG = (8/3) AG; b) the translation in the horizontal plane along the vector AC; c) the translation in the horizontal plane along the vector AE. The (complete) symmetry group of the honeycomb is generated by a) the D3 generators, b) the Z³ generators, and c) the rotation of the honeycomb by 60° about the vertical axis of a pentahedral cell. An infinite pentahedral tessellation includes spatial tunnel structures bounded by skew hexagonal faces. They have six-fold helical symmetry, since the polyhedra packed along the vertical axis repeat after both a rotation by 60° and a translation by a distance equal to the edge length. The main characteristics of the symmetry properties of a pentahedral honeycomb are given in Table 1; entries lost in the source are marked with an ellipsis.

Table 1. The main characteristics of the symmetry properties of a pentahedral honeycomb.
No. | Characteristic | Explanation
1 | Type | Convex uniform honeycomb
2 | Family | Polyhedra with skew hexagonal faces
3 | Cell type | {6.4}
4 | Face type | {6}
5 | Schläfli symbol | {6.6}
6 | Coxeter group | (D3, Z3, Z)
7 | Coxeter-Dynkin diagram | ...
8 | Cells/edge | {4.3}4
9 | Faces/edge | ...
10 | Cells/vertex | {4.3}
11 | Faces/vertex | ...
12 | Edges/vertex | ...
13 | ... | Double triangular bipyramid
14 | Vertex figure | Tetrahedron
15 | Internal angle | ...
16 | Symmetry group | D3
17 | Other properties | Isogonal and isotoxal polyhedra; n = 6-fold helical symmetry of tunnels along Z

4. A SKELETAL APPROACH TO THE PENTAHEDRAL HONEYCOMB
The challenge of designing and building new solid-state crystalline materials from molecular building blocks has been taken up successfully [6]. This success, relating to the reticular (or chemical) synthesis of robust materials with highly porous structures and predetermined chemical properties, was achieved through the investigation of abstract micro- and nanostructural (skeletal) polyhedra. The same conceptual approach could be employed for the creation of macro- and megastructural skeletal systems with predetermined mechanical properties such as minimum mass or maximum stiffness. Branko Grünbaum made a special study of abstract polyhedra, developing an early idea: he defined a face as a set of cyclically ordered vertices, and allowed faces to be skew as well as planar [5].
Moreover, in modern computer graphics, any polyhedron gives rise to a graph, or skeleton, with corresponding vertices and edges.

It has been shown in Section 2 that the macrospatial pentahedral honeycomb has the same skeletal graph, or skeleton, as nanospatial lonsdaleite (hexagonal diamond), whose vertices are tetrapod-shaped junctions. This makes it possible to create structural building constructions such as hybrid bar systems. The tetrapod-shaped connections of the bars, or super finite elements, of such a system should then be joints free of bending moments, giving the most effective macro- and mega-construction from the point of view of the volume of material used [1].

5. CONCLUSIONS
1. The pentahedral honeycomb has been obtained by topological transformations (stretching) of a regular hexagonal prismatic tiling and is quasi-regular (vertex- and edge-transitive), with cells that have three once-folded and two twice-folded regular skew faces.
2. The pentahedral honeycomb belongs to a third type of discrete symmetry group equipped with a topology, an infinite space group that combines elements of point groups and lattice groups and also includes an additional transformation, the screw axis.
3. Pentahedra are uniform polyhedra homeomorphic to hexagonal prisms, consisting of regular skew hexagonal faces and congruent vertices. They are thus all exactly the same size and shape and, after the cube, are the second polyhedra to tile space without gaps or overlaps.
4. Widespread use of pentahedral lattices could be envisioned for the synthesis of minimum-mass, maximum-rigidity structures on a large scale, especially orbital systems or constructions on the Earth's natural satellites.

ACKNOWLEDGMENTS
The authors are grateful for the financial support of the Latvian Science Council grant No /230 "Investigations of geometric, topological and mechanical properties within the spatial analogues of hexagons".

REFERENCES
[1] E. Bervalds, Hybrid Rod Constructions of Steerable Mirror Antennas. Proceedings of the URSI Meeting in Riga, Vol. 1, 4-6 September 1990.
[2] E. Bervalds, Topological Transformations and Design of Structural Systems. Proc. of the World Congress on Optimum Design of Structural Systems, Rio de Janeiro, Brazil, Vol. 1, August 2-6, 1993, pp.
[3] E. Bervalds, Existence of the Phi Relation within a Regular Hexagonal Tiling. Latvian Journal of Physics and Technical Sciences, 2007, No. 2, pp.
[4] E. Bervalds, M. Dobelis, Geometric Properties of a Regular Lattice Caused by the Phi Relation. Proc. of the XIII International Congress of Geometry and Graphics, Dresden, Germany, August 4-8, 2008, 8 pp.
[5] B. Grünbaum, Are Your Polyhedra the Same as My Polyhedra? Discrete and Computational Geometry: The Goodman-Pollack Festschrift, Springer, New York, 2003, pp.
[6] O.M. Yaghi, M. O'Keeffe, N.W. Ockwig, H.K. Chae, M. Eddaoudi, J. Kim, Reticular Synthesis and the Design of New Materials. Nature, Vol. 423, 2003, pp.

DNA Separation by Directed Optical Transport

Avital Braiman (1), Fedor Rudakov (2), and Thomas Thundat (3)
(1) Division of Engineering, Brown University, Providence, Rhode Island 02912, USA; (2) Division of Computer Science and Mathematics, (3) Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA

A wide variety of biological and medical applications require the separation of biomolecules of a particular size from a mixture. Separation is usually achieved by electrophoretic transport of charged biomolecules through a sieving medium under a uniform electric field. Separating biomolecules with a uniform electric field has serious drawbacks, the most significant being that isolating biomolecules of a particular size requires a complete separation of the sample by size. Furthermore, since biomolecules are subject to diffusion, the separation is accompanied by band broadening. We present an alternative design for biomolecule separation that allows the selective separation of biomolecules from a mixture, so that complete separation of the sample by size is not required [1]. Furthermore, our design allows the independent translocation of biomolecules of different sizes along two-dimensional pathways while keeping the biomolecules concentrated. Separation is achieved by optically directed transport of biomolecules through a sieving medium. In our design, a laser beam is projected onto a photoanode in contact with an electrolyte, creating a highly localized electric-field trap. Charged biomolecules within the electrolytic medium migrate towards the center of the photoelectrophoretic trap. By moving the focus of the laser beam along the photoelectrode, the trap is displaced and, consequently, the trapped biomolecules migrate within the medium. When the photoelectrode is in contact with a sieving medium such as a gel, the ability of the biomolecules to follow the trap is size dependent. For a sample consisting of biomolecules of different sizes, the speed of the photoelectrophoretic trap can be selected such that the sample is divided into two molecular packets. One packet contains biomolecules with mobilities higher than the minimum mobility required to follow the trap, while the biomolecules in the other packet have lower mobilities and therefore cannot keep up with the trap. Once the trap has moved a significant distance from its initial position, the biomolecules outside the trap no longer experience any substantial electric field and consequently remain close to their initial position. Therefore, to separate biomolecules of a particular size from the mixture, the speed of the trap must first be selected such that only the biomolecules of interest and faster ones follow the trap, while the next-largest biomolecules are unable to keep up with it. The speed of the photoelectrophoretic trap must subsequently be increased so that only the biomolecules of interest fall out of the trap while the other biomolecules continue to translocate with it. We experimentally demonstrate the concentration and separation of a sample into two concentrated molecular packets consisting of DNA fragments of different sizes. Photoconcentration and separation were performed on 1.5% agarose gel. The diameter of the photoconcentrated sample was 470 μm, which corresponds to only ~5% of the area of the original sample. In addition, we performed Monte-Carlo simulations of DNA separation: we solved the Langevin equation for a DNA particle subjected to diffusion and an electric field and obtained qualitative agreement between our computational and experimental results.
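To make the size-selection mechanism concrete, here is a minimal one-dimensional sketch of such a Langevin simulation. It is our illustration rather than the authors' code: the Gaussian trap potential, the mobility values and all other parameters are assumptions.

import numpy as np

rng = np.random.default_rng(0)

def final_lag(mu, v_trap, D=0.5, depth=40.0, width=1.0, dt=1e-3, T=20.0):
    """Euler-Maruyama integration of an overdamped Langevin equation for a
    particle of electrophoretic mobility mu in a Gaussian trap moving at
    speed v_trap. Returns the particle's final lag behind the trap center."""
    x = 0.0
    for i in range(int(T / dt)):
        c = v_trap * i * dt                       # current trap center
        # Force from the well U(x) = -depth * exp(-(x-c)^2 / (2 width^2))
        f = -depth * (x - c) / width**2 * np.exp(-(x - c)**2 / (2 * width**2))
        x += mu * f * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
    return x - v_trap * T

# A high-mobility (short) fragment keeps up with the trap; a low-mobility
# (long) fragment drops out once the trap outruns it, which is the
# size-selection-by-trap-speed principle described above.
print(final_lag(mu=1.0,  v_trap=2.0))   # small lag: fragment stays trapped
print(final_lag(mu=0.05, v_trap=2.0))   # lag near -v_trap*T: left behind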
We believe that well-established electrophoretic techniques can be integrated with photoelectrophoretic transport to gain more control over the separation process and achieve higher resolution.
[1] A. Braiman, F. Rudakov, and T. Thundat, Highly Selective Separation of DNA Fragments by Optically Directed Transport, Applied Physics Letters, Vol. 96, No. 5, 2010, p. 053701.

A Hybrid Approach to Malay Text Summarization

Norshuhani ZAMIN, Arina GHANI
Department of Informatics and Information Sciences, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, Tronoh, Perak, MALAYSIA

ABSTRACT
Summarization is the art of generating the main points of a long text document by removing redundant and less important information without losing the meaning of the original text. Summaries are significantly shorter than the original text and give a broad overview of the source material. With today's increasing volume of digital information, people find the manual summarization process hectic and time consuming. An automated text summarization system for electronic documents would go a long way towards encouraging people to read, providing quick access to information and thus helping them speed up decision making. Although many research and commercial text summarization tools are available, no research has been officially reported for the Malay language. Malay text summarizers are gaining in demand now that a large amount of Malay-language information is freely accessible over the Internet. This paper presents a hybrid approach to an automated text summarization system for the Malay language. The base system builds on the SUMMARIST system and is extended by combining it with the EstSum system. Experimental results show that expanding the training data size contributes significantly to performance. Overall, our system produced acceptable results: 76% at best and 31% at worst.
Keywords: Malay text summarization, statistical approach, text mining, natural language processing.

1. INTRODUCTION
According to the Malaysian Reading Profile Survey conducted in 1996 by the National Library of Malaysia, the average reading activity in Malaysia is approximately two books a year. A further survey conducted in 2005 on 60,441 Malaysians reported that the problem has not shown significant improvement; Malaysia lags far behind most well-developed countries in reading activity. One of the main causes of this problem is that reading habits develop very slowly in low-income families compared to higher-income families. The multiplication of Malay digital texts on the Internet motivates us to develop an automated summarization system that encourages reading habits by cutting the time needed to read long documents. The earliest research on text summarization was done in the 1960s, and interest in this research has kept growing in recent years. Most of the work found is for English text summarization, but with the increasing demand for this tool for other languages, several research and development works exist for Estonian [1], Scandinavian languages [2], Thai [3], Persian [4] and Swedish [5], as well as the multi-language SUMMARIST system [6] for summarizing text in English, Japanese, Arabic, Spanish, Indonesian and Korean. Summaries are classified into two types: 1) indicative and 2) informative [7]. An indicative summary highlights only the topic of the text, while an informative summary describes its central information. As this research deals with the synopsis of a text, the results should produce an informative summary. However, developing a text summarization tool for the Malay language, which has a totally different grammatical structure from English, is not the only challenge; the accuracy of the results is also an important issue to be discussed.
How good a summary is depends on the percentage of the essence of the text preserved in the summary and on the cohesion between sentences. Since it is difficult to measure the quality of Malay summaries without existing baseline research, the results of this research are compared with an analysis of human-made summaries. The goal of this research is therefore to develop a Malay text summarizer that reaches at least 60% similarity to manual summaries. The development and evaluation consider summaries of various types of documents, such as news articles, magazines, reports, and storybooks.

2. RELATED WORK
In this section we briefly present part of the research literature related to our work.
Text mining. Text mining refers to computational methods for discovering previously unknown, meaningful information from unstructured text. Text mining is closely related to data mining, which finds interesting patterns and trends in large data sets; the only difference is that text mining deals with natural language text, while data mining requires structured databases of facts. The purpose of text mining is to link the extracted information to form new facts or new hypotheses to explore further [8]. In recent years, text mining research has covered various areas, including term association discovery, document clustering, text summarization, and text categorization. Text mining consists of three basic steps: 1) text preparation, the preprocessing of text to extract significant terms or features; 2) text processing, the use of computational methods to identify interesting patterns in the preprocessed text; and 3) text analysis, the evaluation of the extracted output [9].

Text summarization. Text summarization is the process of extracting the most important information from a text and producing an abbreviated version for a particular task and user [23]. Automatic text summarization refers to the use of computational methods to derive the summary of a given text automatically. For the past half century, the natural language processing (NLP) community has explored text summarization research, and the increasing availability of information online has required intensive research in this area. There are two main methods of automatic text summarization: abstraction and extraction [10]. Abstraction is a difficult but promising technique in which new sentences are generated from the original sentences through a process called paraphrasing; it involves the syntactic and semantic study of the particular language and is useful for applications that deal with meaning. The extraction method, on the other hand, is the current state of the art and is commonly used by most existing tools. It weights each sentence of the original text by specific features, selects the most relevant original sentences based on the feature values, and juxtaposes them in the summary. This research adopts the basic extraction-based technique of [11]. The research in [12] classified automatic text summarization approaches into three: 1) the shallow approach, the simplest of all, where a summary is produced by extracting sentences from the information source; the challenge here is to preserve the original context when sentences are taken out of it; 2) the deeper approach, which produces a summary called an abstract, part of whose text may not be found in the original; it finds the most specific generalization of concepts from the texts and uses it for the summary; and 3) the hybrid approach, a combination of extraction techniques with natural language processing techniques. This research proposes a hybrid approach using the morphological and part-of-speech tagging methods of SUMMARIST [6] and the statistical scoring methods of EstSum [1]. The objectives of text summarization include single-document and multiple-document summarization [23]. A single-document summary characterizes the content of a single document, while a multiple-document summary takes a group of documents as input and produces a condensation of the contents of the entire group. Summarizing multiple documents has turned out to be much more complex than summarizing a single document; this research focuses on single-document summarization.
Malay text summarization. Malay is not only a native language of Malaysia but also one of the languages used in Indonesia, Brunei, Singapore, and southern Thailand. The Malay language is rich in colloquial and idiomatic expressions and literary allusions and, like other languages, has its own unique structure and grammar. Although Malay is used throughout the Southeast Asian region, it has become one of the least resourced languages in the world, and consequently only a limited amount of computational linguistics research related to Malay exists. Although there are many studies related to the Malay language, as far as we know none has been officially reported on Malay text summarization.
There are, however, other computational linguistics studies for the Malay language, such as information retrieval [13], essay marking [14], novelty detection [15] and machine translation [16]. An open-source tool for Malay-language corpus analysis was recently produced by ongoing research [17]; it provides access to Malay tokenizers, stemmers, and part-of-speech taggers that are vital to Malay linguistic research.
Other text summarizers. SUMMONS [19] is the first example of a multiple-document summarization system. It summarizes news articles on terrorism from different news agencies and produces a report that merges the relevant information about each identified event. The SUMMONS architecture consists of two main components: 1) a content planner, which selects the relevant information by combining input templates with predefined, instantiated semantic slots, and 2) a linguistic generator, which selects the correct words to express the information in a grammatical and coherent text. An automated text summarization system called EstSum [1] is capable of summarizing Estonian-language newspaper articles. It builds short summaries by selecting the key sentences that describe the document. Sentences are ranked using a weighted combination of statistical, linguistic, and typographical features such as sentence position, format, and type, and the frequency of occurrence of each word. It achieves up to 60% accuracy in evaluations against human-made summaries of newspaper articles. SUMMARIST [6] aims to generate summaries and extracts for arbitrary texts in English and other languages. In that research, an extract is defined as portions taken verbatim from the original (single words or complete passages) and an abstract as novel sentences that describe the content of the original (paraphrases or completely synthesized text). SUMMARIST combines statistical techniques with symbolic word knowledge derived from WordNet [18], a large English lexical database. Its technique is based on the following equation:

summarization = topic identification + interpretation + generation   (1)

The purpose of topic identification is to filter the input, retaining only the most important central topics, using techniques such as stereotypical text structure, keywords, high-frequency cue phrases and discourse structure. Interpretation processes the topics, reformulating and compressing them; this process is vital for achieving greater compactness, removing redundancies, rephrasing sentences, and merging related topics into a more general one. The generation process aims to reformulate the interpreted data into a new text. The SUMMARIST architecture is illustrated in Figure 1.

[Figure 1. SUMMARIST architecture] SweSum [2] is a Swedish text summarizer. Sentences are extracted based on a combination of linguistic, statistical and heuristic methods. SweSum works in three steps: 1) keyword tokenization, scoring and extraction: the input text is broken into sentences, with sentence boundaries identified by looking for periods, exclamation points, and question marks, and the sentences are then scored using statistical, linguistic, and heuristic methods; 2) sentence ranking: the score of each word in the sentence is calculated by a set of parameters, which can be adjusted by the user, and the total score is accumulated, so that sentences containing common content words score higher; and 3) summary extraction: the final summary file is created in HTML format. These processes are represented schematically in Figure 2. The lexicon is a database consisting of key/value pairs where the key is the inflected word and the value is the stem word in Swedish. [Figure 2. Architecture of SweSum]
FarsiSum [20] is a text summarizer for Persian built on the SweSum modules. The system is implemented as an HTTP client/server application. The FarsiSum tokenization module uses a Persian stop list in Unicode format and a small set of heuristic rules. The stop list is a file that includes the most common verbs, pronouns, adverbs, conjunctions, prepositions, and articles in Persian. Figure 3 shows the FarsiSum architecture with each of the summarization steps numbered accordingly. The system is located on the server side and the client is a browser. The summarization steps are as follows. Step 1: The browser sends a summarization request to the web server where FarsiSum is located; the URL of the document to be summarized is attached to the request, and the original text is in Unicode format. Steps 2-5: The document is summarized in three phases similar to SweSum; the words in the document are compared with the words in the Persian stop list. Step 6: The summary is returned to the HTTP server, and the browser then displays the summarized text on the screen. [Figure 3. FarsiSum architecture]

3. PROPOSED APPROACH
In this research we chose to work with the extraction method. We believe that with further refinement of the formula discussed in this section, we can maximize the retention of important information in Malay text. We have considered adopting several techniques from existing successful research. We divided the entire summarization process into three phases: 1) preprocessing, 2) text extraction and 3) sentence selection. In the preprocessing phase, we use the technique introduced in SUMMARIST [6]. The preprocessing algorithm considers only two modules of SUMMARIST: 1) the tokenizer and 2) the token frequency counter. There are two forms of tokenizer, a word tokenizer and a sentence tokenizer. The word tokenizer chunks every word in the input text to produce a set of tokenized text, with the boundary of each word determined by the white space between words, while the sentence tokenizer breaks the text into sentences, taking the period (.) as the boundary between sentences. A Malay text tokenizer algorithm was developed to recognize Malay words. In the token frequency counter module, the number of occurrences of each word appearing in the original text is counted, and the words with the highest frequencies are considered the keywords of the text. However, commonly used words such as articles are ignored, e.g. the (English) = itu (Malay) and is (English) = ialah (Malay).
From this word-frequency ranking, we select the first 10 words with the highest number of occurrences.
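A minimal sketch of the two preprocessing modules just described (word and sentence tokenizers plus a token frequency counter); the tiny stop-word list and the sample sentence are illustrative stand-ins, not the system's actual resources.

from collections import Counter

# Illustrative Malay stop words only; the real system uses a fuller list.
STOP_WORDS = {"itu", "ialah", "dan", "yang", "di", "telah"}

def tokenize_sentences(text: str) -> list[str]:
    # Sentence boundary is the period, as described above.
    return [s.strip() for s in text.split(".") if s.strip()]

def tokenize_words(text: str) -> list[str]:
    # Word boundary is white space, as described above.
    return text.lower().replace(".", " ").split()

def top_keywords(text: str, n: int = 10) -> list[str]:
    # Token frequency counter: skip stop words, keep the n most frequent.
    counts = Counter(w for w in tokenize_words(text) if w not in STOP_WORDS)
    return [w for w, _ in counts.most_common(n)]

sample = "Harta benda awam telah dirosakkan. Perbuatan itu mengundang komplikasi."
print(tokenize_sentences(sample))
print(top_keywords(sample))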

Referring to these selected words, all sentences containing any of them are merged into a preprocessed text. The original text is thus simplified based on the word-frequency scores. In the text extraction phase, we apply the Edmundson statistical formula [21, 22] shown below, with the recent EstSum research [1] as our main reference:

W(s) = (αP(s) + βF(s) + γK(s)) / (P(s) + F(s) + K(s))   (2)

where W(s) is the weight of sentence s, P(s) the position-based scoring function, F(s) the format-based scoring function, K(s) the keyword-based scoring function, and α, β and γ are constants. The constants α, β and γ act as tuning parameters and were fitted by hand in advance on a manually created training corpus; the expression P(s) + F(s) + K(s) in (2) is a normalization factor. A merge module evaluates (2) to give each sentence a unique weight. The scores are computed from three properties identified by experts: 1) position, the score given to the location of the sentence; regularities in the text structures of many genres are useful for classifying sentences according to their location, so, for example, the first sentence of the text tends to contain important information and is therefore given a higher score; 2) format, the score given to the style and format of the font; for example, a word written in bold or italics receives a higher score, since this signals the importance of the word; and 3) keyword, the score given to the frequency with which the word appears in the text. The total score gives each sentence a unique weight. Finally, in the sentence selection phase, given a threshold value, the sentences with the highest scores are merged and taken as the summary. The general architecture of our proposed work is shown in Figure 4. [Figure 4. Malay Text Summarizer architecture]

4. EXPERIMENTS
This section briefly describes the methodology used in evaluating our system.
Methods. Our experiment requires a training corpus. Given the non-existence of a Malay corpus for text summarization, we created our own corpus consisting of summaries compiled by four Malay-language experts. A total of 10 original Malay news articles covering general, business and sports news were given to each of the four experts for manual summarization. Each expert submitted 10 summaries limited to 30% of the original text length, resulting in 40 hand-created summaries. The length of the source text varies depending on the type of text. The process is illustrated in Figure 5. [Figure 5. Training corpus creation process: each original article is summarized by Experts #1 to #4 into summary articles. The example original article reads, in Malay: Perbuatan merosakkan harta benda awam atau vandalisme kian hari menular di negara kita. Harta benda awam telah dirosakkan sewenang-wenangnya. Perbuatan yang tidak bertanggungjawab ini telah mengundang pelbagai komplikasi. Pihak kerajaan telah membelanjakan berjuta-juta ringgit untuk memperbaiki kemudahan awam yang telah dirosakkan oleh individu yang tidak mempunyai perasaan memiliki setiap kemudahan yang disediakan. Banyak perkara yang dapat dilakukan oleh masyarakat untuk membanteras vandalisme daripada terus-menerus berleluasa. Pendidikan merupakan tunjang atau paksi utama yang dapat mengubat penyakit sosial ini. Di peringkat sekolah, pendidikan diharap mampu mencambahkan kesedaran cinta terhadap harta benda awam. Di samping itu, masyarakat setempat pula perlu prihatin dan berasa terpanggil untuk mengambil tindakan tanpa perlu mengharapkan pihak berkuasa membuat tangkapan.
Dalam hal ini, Jawatankuasa Perumahan yang berkenaan perlu bergabung tenaga untuk membuat kawal selia di kawasan masing-masing. Kepada pesalah-pesalah vandalisme pula, hukuman yang lebih berat perlu dikenakan sebagai suatu jalan pencegahan yang paling baik. Ada cadangan daripada pihak masyarakat yang mahu supaya para pesalah didenda melakukan kerja pembersihan di kawasan-kawasan awam.]
Scoring mechanisms. The baseline scoring criteria are obtained from [1], with minor modifications to accommodate the different structure of Malay text. As our corpus of training extracts is relatively small (only 40 summaries), we manually examined and compared each of the original texts and its summaries. For position-based scoring, we assigned the appropriate weight to a sentence by investigating the placement of the sentences that appeared in the summaries, using the following rules: 1) the first 3 sentences of the original text; 2) the first 3 sentences of each paragraph in the original text; and 3) the first 3 sentences after each subheading in the original text. An example of position-based scoring for one of the summaries is shown in Table 1.
Table 1. Example of position-based scoring. The scored features are the 1st, 2nd and 3rd sentence in the article, the 1st, 2nd and 3rd sentence in a paragraph, and the 1st, 2nd and 3rd sentence after a subheading; the percentage-in-extract and assigned W(s) values were not recoverable from the source.
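A sketch of how the weighting in (2) and the position rules above might look in code; the constants and per-rule scores below are placeholders, since the paper's hand-tuned values are not given.

# Illustrative constants; the authors hand-tuned theirs on the training corpus.
ALPHA, BETA, GAMMA = 1.0, 0.5, 2.0

def position_score(sent_in_article: int, sent_in_paragraph: int) -> float:
    # Position rules above: the first 3 sentences of the article and of
    # each paragraph get higher scores, earliest sentences highest.
    score = 0.0
    if sent_in_article < 3:
        score += 3 - sent_in_article
    if sent_in_paragraph < 3:
        score += 3 - sent_in_paragraph
    return score

def format_score(bold: bool, italic: bool) -> float:
    # Emphasized text signals the importance of the sentence.
    return float(bold) + float(italic)

def keyword_score(words: list[str], keywords: set[str]) -> float:
    # Frequency of keywords appearing in the sentence.
    return float(sum(w in keywords for w in words))

def weight(p: float, f: float, k: float) -> float:
    # Eq. (2): weighted combination normalized by P(s) + F(s) + K(s).
    norm = p + f + k
    return (ALPHA * p + BETA * f + GAMMA * k) / norm if norm else 0.0

Sentences would then be ranked by this weight, and the top-scoring ones, up to the 30% compression limit, would form the extract.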

In addition to position-based scoring, we hypothesize that a paraphrased sentence found in an expert's summary should be treated as multiple sentences during the manual investigation. Paraphrasing joins several sentences in the author's own words, which is natural in human language and has been a challenge for text summarization research for over a decade [24]; the research in [25] is a promising start toward automatically generating sentence paraphrases. At this stage of development, we do not consider any paraphrasing in the summary generated by our system. For format-based scoring, we score a sentence based on font style (default, bold, italic) and punctuation marks (exclamation marks, question marks, double quotes). Unlike [1], we excluded the figure-caption and text-author scores, as these features were not present in our training data. For keyword-based scoring, we use a general Malay word-frequency table generated from the 10 original texts by our token frequency counter module; this helps to estimate whether a word appears more frequently in the original text than in the summarized text. This scoring follows two rules: 1) words belonging to the title (article title) and to subheadings receive higher scores, and 2) all other words receive a similarly lower weight.
Evaluation metrics. The summaries generated by our proposed system are compared with those of human experts, given the non-existence of a commercially available Malay text summarizer. A survey in [26] describes and compares various human and automated metrics for evaluating summaries. We employ the performance measures commonly used in traditional natural language processing tasks: precision, recall, and F1 score. These scores quantify how close the system's extract is to the human's. Precision shows the accuracy of the retrieved sentences, recall reflects how many good sentences the system has missed, and the F1 score is a weighted average of precision and recall [27]. Given an input (original) text, a human-made summary, and a system-generated summary, the following metrics apply:

Precision = correct / (correct + incorrect)   (3)
Recall = correct / (correct + missed)   (4)
F1 Score = 2 × (Precision × Recall) / (Precision + Recall)   (5)

where correct is the number of sentences extracted by both the system and the human, incorrect the number of sentences extracted by the system but NOT by the human, and missed the number of sentences extracted by the human but NOT by the system. A generated summary is considered correct if it contains the sentences that were tagged in the human summary, or partially correct if it provides enough context for the passage. It is judged incorrect if the necessary context is completely misleading, if it does not contain the expected passage at all, or if there is not enough context for the passage. A gold standard marks a sentence as belonging to the summary only when all four human experts agree.
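Equations (3)-(5) translate directly into code; a minimal sketch that treats each summary as a set of sentence identifiers:

def evaluate(system: set[str], human: set[str]) -> tuple[float, float, float]:
    """Sentence-level precision, recall and F1 score, per eqs. (3)-(5)."""
    correct = len(system & human)       # extracted by both system and human
    incorrect = len(system - human)     # extracted by the system only
    missed = len(human - system)        # extracted by the human only
    precision = correct / (correct + incorrect) if system else 0.0
    recall = correct / (correct + missed) if human else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1

# Example: the system extracted sentences s1-s3, the expert chose s2-s4.
print(evaluate({"s1", "s2", "s3"}, {"s2", "s3", "s4"}))  # about (0.67, 0.67, 0.67)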
5. RESULTS
Table 2 shows the performance of our system; its rows give the statistics for the three test collections. All summaries used a fixed-length compression rate of 30%.
Table 2. System performance evaluation on the General News, Business News and Sports News test collections. For each collection the table reports the number of documents, the average number of judgments per document, the average number of sentences per summary, and the average precision, recall and F1 score (%); the numeric values were not recoverable from the source.
The average number of sentences per summary in General News is relatively high compared to the other test collections because the number of sentences in the body of those documents is higher. Consequently, General News performs better on average than the other test collections. Based on the feedback provided by the human experts, the reasons why General News outperforms the others are as follows: 1) the collections differ from each other in genre; 2) Business News provides extremely comprehensive analysis, such as the stock market, forex market and mutual funds, and its regularly repeated technical expressions lowered the average score; 3) most system-generated summaries are difficult to understand; and 4) the amount of training data is relatively small for drawing final statistical conclusions. More research is needed in the future to reveal clearer reasons. Accuracy is always a big problem in natural language applications such as text summarization, machine translation and speech processing. For text summarization, evaluation is an important aspect of ensuring that the system has achieved the goal of resembling human-made summaries. Indeed, it is naturally difficult to find two similar human-made summaries of the same text. A study in [28] found that, at best, there was an average 70% agreement between two human-made summaries. On average, our system produces summaries that are approximately 50% similar to manually created summaries. Although the agreement between the computer-generated summaries and the human summaries is quite low, this can be a promising start for Malay text summarization research.

6. CONCLUSIONS AND FUTURE WORK
This article has presented a Malay text summarization system using a hybrid approach: the preprocessing module introduced by SUMMARIST [6] and the statistical scoring methods described in the EstSum text extraction module [1]. The experiments show that, by combining both techniques, the system can extract the most important sentences from Malay news articles.

This is a cost-effective solution that reduces the time users spend reading documents without losing overall comprehension. The summary helps users easily decide a document's relevance to their interests and acquire the desired documents with less mental load. Since research in this area is still at an immature stage, there are many things to investigate in the future. One issue worth highlighting is the widespread use of disparate metrics: there is no standard human or automatic evaluation metric in text summarization with which to compare different systems and establish a baseline. Therefore, to increase the accuracy of the evaluation, we plan in the future to add the measures proposed in [29]: 1) quantitative measures, involving decision-relevance categorization, summarization time, and summary length; and 2) qualitative measures, involving user preferences and detailed feedback on why a summary was or was not acceptable for a given task.

7. ACKNOWLEDGMENTS
We would like to thank the anonymous reviewers of the conference for their valuable comments. We thank Dr. Lai Weng Kin of MIMOS, Malaysia, and Professor Dr. Alan Oxley and Dr. Mohd Nordin Zakaria of Universiti Teknologi PETRONAS, Malaysia, for their suggestions that helped improve this work. We are indebted to the four Malay language experts for their efforts in summarizing the given documents manually and on time.

8. REFERENCES
[1] M. Kaili and M. Pilleriin, EstSum - Estonian Newspaper Texts Summarizer, in Proceedings of the 2nd Baltic Conference on Human Language Technologies.
[2] D. Hercules, Text Summarization for Swedish, Report TRITA-NA-P0015.
[3] C. Jaruskulchai and C. Kruengkrai, A Practical Text Summarizer by Paragraph Extraction for Thai, in Proceedings of the 6th International Workshop on Information Retrieval with Asian Languages, Vol. 11, 2003, pp.
[4] M. Mazdak, FarsiSum - A Persian Text Summarizer, Master's thesis, Department of Linguistics, Stockholm University.
[5] H. Dalianis, SweSum - A Text Summarizer for Swedish, Technical Report TRITA-NA-P0015.
[6] H. Eduard and L. Chin Yew, Automated Text Summarization in SUMMARIST, in Proceedings of the Workshop on Intelligent Scalable Text Summarization, 2000, pp.
[7] H. Saggion and G. Lapalme, Generating Indicative-Informative Summaries with SumUM, Computational Linguistics, Vol. 28, No. 4, 2002, pp.
[8] M.A. Hearst, Untangling Text Data Mining, in Proceedings of the Association for Computational Linguistics, Vol. 37, 1999, pp.
[9] T.M. Chang and W.F. Hsiao, A Hybrid Approach to Automatic Text Summarization, in Proceedings of the 8th IEEE International Conference on Informatics and Information Technology.
[10] I. Mani, Automatic Summarization, John Benjamins.
[11] H.P. Luhn, The Automatic Creation of Literature Abstracts, IRE National Convention, 1958, pp.
[12] A.M. Diola, J.T.T.O. López, P.F. Torralba, S. So and A. Borra, Automatic Text Summarization, in Proceedings of the 2nd National Natural Language Processing Research Symposium.
[13] S.Y. Tai, C.S. Ong and N.A. Abdullah, On Designing an Automated Malaysian Stemmer for the Malay Language, in Proceedings of the 5th International Asian Language Information Retrieval Workshop.
[14] A. Selamat and K.B. Yee, Web-Based Automated Essay Marking System for Malay Historical Texts Using the Nearest Neighbor Technique, in Proceedings of the International Conference on Knowledge Management.
[15] A.T. Kwee, F.S. Tsai and W. Tang, Sentence-Level Novelty Detection in English and Malay, Lecture Notes in Artificial Intelligence, Vol. 5476, 2009, pp.
[16] L.C. Tong, English-Malay Translation System: A Laboratory Prototype, in Proceedings of COLING.
[17] T. Baldwin and S. Awab, Open Source Corpus Analysis Tools for Malay, in Proceedings of the 5th International Conference on Language Resources and Evaluation.
[18] WordNet is available at
[19] K.R. McKeown and D.R. Radev, Generating Summaries of Multiple News Articles, in Proceedings of the ACM Special Interest Group on Information Retrieval (SIGIR).
[20] M. Hassel and N. Mazdak, FarsiSum - A Persian Text Summarizer, in Proceedings of the 20th International Conference on Computational Linguistics.
[21] H.P. Edmundson, New Methods in Automatic Extracting, Journal of the Association for Computing Machinery, Vol. 16, No. 2, 1969, pp.
[22] I. Mani, Automatic Summarization, Amsterdam: John Benjamins Publishing Co.
[23] I. Mani and M.T. Maybury, Advances in Automatic Text Summarization, MIT Press.
[24] M. Shigeru and Y. Kazuhide, Some Research Topics and Future Prospects in Text Summarization: Statistical Methods, Paraphrasing and More, Joho Shori Journal, Vol. 43, No. 12, 2002, pp.
[25] R. Barzilay and L. Lee, Learning to Paraphrase: An Unsupervised Approach Using Multiple-Sequence Alignment, in Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL).
[26] C.Y. Lin and E. Hovy, Manual and Automatic Evaluation of Summaries, in Proceedings of the ACL-02 Workshop on Automatic Summarization, 2002, pp.
[27] E. Hovy, Text Summarization, The Oxford Handbook of Computational Linguistics, Oxford University Press, 2005, pp.
[28] M. Hassel, Exploitation of Named Entities in Automatic Text Summarization for Swedish, in Proceedings of the Nordic Conference on Computational Linguistics.
[29] T.F. Hand, A Proposal for Task-Based Evaluation of Text Summarization Systems, in Proceedings of an Association for Computational Linguistics Workshop, 1997, pp.

EEG Signal-Based Clustering of Stimulated Emotions Using the Duffing Oscillator

P. Bhowmik, S. Das, D. Nandi, A. Chakraborty, A. Konar, A.K. Nagar
P. Bhowmik, S. Das and A. Konar are with the Department of Electronics and Tele-Communication Engineering, Jadavpur University, Calcutta-32, West Bengal, India. D. Nandi is with the Calcutta Institute of Engineering and Management, Calcutta-24. A. Chakraborty is with the Department of Computer Science and Engineering, St. Thomas College of Engineering and Technology, Calcutta-23; she is also a Visiting Professor at the ETCE Department, Jadavpur University (aruna_stcet@rediffmail.com). A.K. Nagar is with the Department of Computer Science, Liverpool Hope University, Liverpool, UK (nagara@hope.ac.uk).

ABSTRACT
The Duffing oscillator is well known for its chaotic behaviour. This article aims to cluster emotions from the EEG response to external audiovisual stimuli used to excite a subject. The EEG signal corresponding to a specific emotive stimulus is used as the excitation input to a nonlinear Duffing oscillator, and the phase trajectory diagrams of the two state variables of the oscillator dynamics show significant differences for the various emotion-arousing stimuli. Experimental investigations reveal that injection of Gaussian noise with a signal-to-noise ratio as low as 25 dB preserves the emotion clustering results, indicating the robustness of the clustering. Furthermore, for different preclassified audiovisual stimuli responsible for the arousal of a specific emotion, the phase portraits obtained from a subject's EEG data show substantial similarity, indicating accuracy of the clustering.
Keywords: Duffing Oscillator, EEG, Clustering of Emotions, Gaussian Noise.

1. INTRODUCTION
Perception involves the interpretation of images, sounds, smells and touch. Perception is a relatively young discipline within Artificial Intelligence, and there has been comparatively little work on the perception of emotions. Researchers, however, are interested in developing new models and techniques to understand and recognize emotions from external manifestations, such as crying, laughing, etc. This article deals with the classification of emotions provoked by audiovisual stimuli from electroencephalographic (EEG) signals. Biologists believe that most of our high-level comprehension processes related to emotions are due to the interplay of neural and hormonal activities. EEGs, representing the neural activities of the brain, could help us understand human emotions better than other widely used modalities such as facial expression [1], [8], [17] and voice [2], [6], [17]. In recent times, researchers have begun to pay attention to electroencephalography (EEG) [19], functional magnetic resonance imaging (fMRI) [11], [18], positron emission tomography (PET) [18] and magnetoencephalography (MEG) [7] signals as a basis for correctly determining the emotional response to an external stimulus. Unfortunately, however, very little of the functioning of the brain has been identified so far and, consequently, almost no interesting results on emotion clustering from these modes of information extraction have been reported. The main objective of this work is to classify the emotion of a subject from the EEG signal obtained during the subject's audiovisual arousal.
In our initial research [4], [5], we classified input stimuli based on their power to arouse a specific emotion. We use these stimuli in the present experiment, and we would like to examine whether the stimuli used for the arousal of the same emotion ultimately map the EEGs onto a unique pattern. To examine the similarity between EEG patterns corresponding to stimuli that excite a specific emotion, we employ a Duffing oscillator and record the response of the oscillator to the EEG signal used as its excitation input. The phase trajectories are constructed from the two state variables of the oscillator dynamics, and similarity in chaotic behavior is observed in the phase trajectories for similar stimuli. This fundamental observation reveals that the EEG obtained on arousal of a specific emotion has a unique characteristic. Therefore, the classification of emotions from the EEG signal should give good accuracy compared with traditional means of clustering emotions from voice and facial expressions. The paper is organized into five sections. In Section 2, we briefly describe the state-space representation of the dynamics of a Duffing oscillator and how the phase trajectories are derived from the time response of the oscillator dynamics. In Section 3, we present the experimental results for emotion clustering obtained by

observing similarity in the phase trajectories. The effect of noise on the EEG signal is studied in Section 4. The conclusions are listed in Section 5.

2. THE DYNAMICS OF THE DUFFING OSCILLATOR AND THE PHASE RESPONSE
In this section, we use a specialized nonlinear oscillator dynamics whose chaotic behavior in the time response has been verified [3], [13], [14], [16]. The dynamics of the Duffing oscillator resemble the typical spring-mass-damper system of a conventional mechanical process [12], [15]. However, the spring in the present context is a nonlinear device with an additional restoring-force component proportional to the cube of the displacement; the ideal-spring restoring force obeying Hooke's Law also holds in the Duffing oscillator dynamics. Consequently, the restoring force has two components, one following Hooke's Law, while the other, represented by the cubic displacement term, is due to a condition of high spring stiffness. The dynamics of the Duffing oscillator is given in equation (1):

d²x/dt² + δ(dx/dt) + βx + αx³ = γ cos(ωt) + k e(t)   (1)

where x represents the linear displacement, dx/dt the velocity of a unit mass in the spring-mass-damper system, βx and αx³ are due to the restoring force of the spring, γ cos(ωt) is a fixed excitation input that maintains a certain level of oscillation in the response of the dynamics, and e(t) is the input disturbance to the oscillator. In the present context, we use the EEG signal as the disturbance input e(t). We take α = 1, β = -1, γ = 0.826, δ = 0.5 and the gain k of the EEG signal as 5. The basic dynamics (1) can be equivalently represented by the state equations (2) and (3):

dx/dt = y   (2)
dy/dt = -δy - βx - αx³ + γ cos(ωt) + k e(t)   (3)

Initially, the EEG signal, which was obtained in sampled form, was passed through a first-order hold circuit, whose transfer function is given by

G(s) = (1 + Ts)(1 - e^(-Ts))² / (T s²)   (4)

where T is the sampling time and s the Laplace-domain operator. The hold circuit is used to obtain a continuous version of the discrete EEG signal. A Runge-Kutta algorithm was then used to solve the coupled differential equations (2) and (3), and the phase portraits of x against y over different time intervals were plotted. A typical phase portrait for the initial values x(0) = 2 and y(0) = 20 is given in Fig. 2.a for convenience. Since the Duffing oscillator has nonlinear dynamics, as shown in the block diagram of Fig. 1, it is evident that the phase portraits could take different shapes for different initial conditions. Experimental instances reveal, however, that the chaotic response of the dynamics persists even when new initial conditions are set. Figures 2.b and 2.c illustrate this behavior with different initial conditions. [Fig. 1: Block diagram of a Duffing oscillator. Fig. 2.a: Phase trajectory for anger with initial condition x(0) = 2, y(0) = 20. Fig. 2.b: Phase trajectory for anger with initial condition x(0) = -2, y(0) = 20. Fig. 2.c: Phase trajectory for anger with a third initial condition, x(0) = -2.]
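A minimal sketch of the simulation loop just described: fixed-step fourth-order Runge-Kutta applied to eqs. (2) and (3). The drive frequency ω and the synthetic stand-in for the held EEG signal are our assumptions, since the paper does not list them.

import numpy as np

# Parameters from the text; OMEGA is an assumed drive frequency.
ALPHA, BETA, GAMMA, DELTA, K, OMEGA = 1.0, -1.0, 0.826, 0.5, 5.0, 1.0

def deriv(t, state, e):
    """Right-hand sides of eqs. (2) and (3) with disturbance input e(t)."""
    x, y = state
    return np.array([y,
                     -DELTA * y - BETA * x - ALPHA * x**3
                     + GAMMA * np.cos(OMEGA * t) + K * e(t)])

def rk4_trajectory(e, state=(2.0, 20.0), dt=0.01, steps=50000):
    """Fixed-step 4th-order Runge-Kutta; returns the (x, y) trajectory,
    which is plotted as the phase portrait (y against x)."""
    s = np.array(state, dtype=float)
    traj = np.empty((steps, 2))
    for i in range(steps):
        t = i * dt
        k1 = deriv(t, s, e)
        k2 = deriv(t + dt / 2, s + dt / 2 * k1, e)
        k3 = deriv(t + dt / 2, s + dt / 2 * k2, e)
        k4 = deriv(t + dt, s + dt * k3, e)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = s
    return traj

# Synthetic stand-in for the held EEG signal; a real run would interpolate
# the sampled EEG through the first-order hold of eq. (4).
eeg = lambda t: 0.1 * np.sin(7.0 * t)
portrait = rk4_trajectory(eeg)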

[10], [14], [19], and the oscillator response is obtained by solving the differential equations with the Runge-Kutta method. The experiment was carried out with 15 audiovisual stimuli, three of which correspond to the excitation of each specific emotion. Below is a brief description of the principles of automatic identification of the audiovisual stimulus best suited to the excitation of a certain emotion. In order to identify the correct audiovisual stimulus responsible for the activation of a certain emotion, we manually classified the stimuli with the help of 50 observers, most of whom were university students and faculty. Each observer was asked to classify a given audiovisual stimulus over 5 emotion classes: anger, fear, joy, relaxation, and sadness. Each used a 100-point scale and assigned individual scores across the whole space of the 5 emotions, such that the scores assigned to a given audiovisual stimulus sum to 100. Over the 50 observers, we determined the mean and the variance of the scores assigned to a particular emotion category and assessed the mean/variance ratio for each of the 5 emotions. The emotion with the largest mean/variance ratio is considered the best category for a given stimulus. The experiment was repeated for 50 of these stimuli, and the mean/variance ratio of the winning emotion for each stimulus was identified. A ranking algorithm was then applied to sort the stimuli in descending order of their mean/variance measure within the specific emotion category, and the first 3 stimuli for each emotion category were identified from the list. The entire experiment was carried out with these 3 stimuli responsible for the excitation of a specific emotion. Consequently, for 5 emotions, we have 5 × 3 = 15 best selected audiovisual samples. Table I gives a tabular representation of the results obtained from the responses of 50 subjects, each of whom was shown 60 audiovisual stimuli. It can be seen that each row of Table I sums to 100.

TABLE I: EVALUATION OF THE AROUSAL POTENTIAL OF SELECTED AUDIOVISUAL FILM CLIPS FOR AROUSING DIFFERENT EMOTIONS
(For each of subjects 1-50 and each clip, the table lists the percentage arousal of each emotion: Anger, Relax, Joy, Sad, Fear.)

TABLE II: EXPERIMENTAL RESULTS ON THE CLUSTER SHAPES (obtained experimentally, without noise)
Anger: The phase trajectory covers the smallest area and width, confined between -8 and 8.
Fear: An extension is visible to the right side of the main phase trajectory.
Happiness: The upper lobe is very dispersed, less dense, thus covering the maximum surface area.
Relaxation: A thick lobe forms above the upper lobe. An extension is formed closer to the left side of the original upper lobe.
Sadness: Below the lower lobe, another lobe is formed, which is scanty in nature.
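The stimulus-selection procedure described above can be sketched in a few lines, assuming the scores are held in an array of shape (observers, clips, emotions); the variable names and data layout are illustrative, not taken from the paper.

    import numpy as np

    EMOTIONS = ["anger", "fear", "joy", "relaxation", "sadness"]

    def select_best_clips(scores, top_k=3):
        """scores: array (n_observers, n_clips, 5); each observer's row per clip sums to 100.
        Returns, for each emotion, the top_k clip indices by mean/variance ratio."""
        mean = scores.mean(axis=0)              # (n_clips, 5)
        var = scores.var(axis=0) + 1e-12        # guard against division by zero
        ratio = mean / var                      # mean/variance per clip and emotion
        winner = ratio.argmax(axis=1)           # winning emotion per clip
        best = {}
        for e_idx, name in enumerate(EMOTIONS):
            clips = np.flatnonzero(winner == e_idx)           # clips won by this emotion
            order = clips[np.argsort(-ratio[clips, e_idx])]   # descending mean/variance
            best[name] = order[:top_k].tolist()
        return best

    # Example with random ratings: 50 observers, 60 clips, 5 emotions
    rng = np.random.default_rng(1)
    raw = rng.random((50, 60, 5))
    scores = 100 * raw / raw.sum(axis=2, keepdims=True)       # rows sum to 100
    print(select_best_clips(scores))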

Phase trajectory clustering

We performed two different experiments for emotion clustering in EEG space. First, different audiovisual stimuli were used to excite a specific emotion of a subject, and the Duffing oscillator response to the subject's EEG signal was obtained with the initial condition x = 0 and y = 0. Table II above offers a comparative study of the phase portraits of x against y formed by one of the selected audiovisual stimuli for each of the emotions. We note that for the three stimuli responsible for arousing the same emotion, the phase trajectories appear almost similar, indicating the fundamental point that similar stimuli arousing a given emotion evoke similar brain activity and hence similar EEG responses. These EEGs, when fed to a Duffing oscillator, preserve that similarity in the phase portraits of the oscillator state variables. Figures 3.a, 3.b, 3.c, 4.a, 4.b and 4.c, for example, demonstrate the similarity in the phase portraits for the arousal of the fear and relaxation emotions by their stipulated stimuli.

Fig. 3.a: Phase trajectory for Fear due to the 1st Fear stimulus.
Fig. 3.b: Phase trajectory for Fear due to the 2nd Fear stimulus.
Fig. 3.c: Phase trajectory for Fear due to the 3rd Fear stimulus.
Fig. 4.a: Phase trajectory for Relaxation due to the 1st Relaxation stimulus.
Fig. 4.b: Phase trajectory for Relaxation due to the 2nd Relaxation stimulus.

Fig. 4.c: Phase trajectory for Relaxation due to the 3rd Relaxation stimulus.

4. EFFECT OF NOISE ON THE EMOTION CLUSTERING OF THE DUFFING OSCILLATOR RESPONSE

In this section, we experiment with adding noise to the original signal corresponding to a specific emotion, and we observe the changes in the phase portrait obtained from the response of the Duffing oscillator. It is interesting to note that when the signal-to-noise ratio of the EEG signal is maintained at a level of 25 dB, the phase portraits maintain their similarity, indicating robustness in the emotion clustering. Figures 5.a, 5.b, 5.c, 6.a, 6.b and 6.c demonstrate the behavior of the phase portrait at different signal-to-noise ratios, as indicated in the figure captions. It is also noteworthy that when the signal-to-noise ratio falls below a threshold, misclassification begins, with noticeable differences appearing in the phase portraits for a given emotion.

Fig. 5.a: Phase trajectory for Anger when the EEG signal is corrupted by noise of SNR 30 dB.
Fig. 5.b: Phase trajectory for Anger when the EEG signal is corrupted by noise of SNR 25 dB.
Fig. 5.c: Phase trajectory for Anger when the EEG signal is corrupted by noise of SNR 20 dB.
Fig. 6.a: Phase trajectory for Joy when the EEG signal is corrupted by noise of SNR 30 dB.
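The noise injection can be reproduced with a short sketch like the one below, which scales white Gaussian noise to a target SNR in dB before adding it to the EEG epoch; the Gaussian noise model is an assumption, since the paper does not specify the noise type.

    import numpy as np

    def add_noise(signal, snr_db, rng=None):
        """Return the signal corrupted by white Gaussian noise at the requested SNR (dB)."""
        rng = rng or np.random.default_rng()
        p_signal = np.mean(signal ** 2)                 # average signal power
        p_noise = p_signal / (10 ** (snr_db / 10))      # noise power implied by the SNR
        noise = rng.standard_normal(signal.shape) * np.sqrt(p_noise)
        return signal + noise

    rng = np.random.default_rng(2)
    eeg = rng.standard_normal(2048)        # placeholder EEG epoch
    for snr in (30, 25, 20):               # the SNR levels shown in Figs. 5 and 6
        noisy = add_noise(eeg, snr, rng)
        # feed `noisy` to the Duffing integration and compare the phase portraits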

Fig. 6.b: Phase trajectory for Joy when the EEG signal is corrupted by noise of SNR 25 dB.
Fig. 6.c: Phase trajectory for Joy when the EEG signal is corrupted by noise of SNR 20 dB.

5. CONCLUSIONS

This article attempted to cluster emotions from stimulated EEG signals using the Duffing oscillator as the vehicle. EEG signals elicited by an emotion-specific excitatory stimulus were supplied as input to the Duffing oscillator, and the phase portrait of its response was plotted. Similarity in the phase portraits is taken as EEG clustering in phase space; consequently, emotion clustering can be performed by determining the similarity of EEG signals. A noise analysis reveals that the emotion clusters remain clearly visible in the phase portrait as long as the signal-to-noise ratio is kept above a prescribed threshold (25 dB). The experiments also show that the excitations responsible for arousing a specific emotion produce similar EEG signals, which can easily be clustered in phase space from the Duffing oscillator response. In summary, the similarity in the chaotic behavior of the phase portraits reflects the similarity in the EEGs and, consequently, the similarity in the emotions. The paper thus opens up a new methodology for clustering emotions from EEG signals in phase space.

REFERENCES
[1] Adolphs R., Damasio H., Tranel D., Damasio A. R., Cortical systems for emotion recognition in facial expressions, Journal of Neuroscience, 1996;16:7678-87.
[2] Bezooijen R. v., The Characteristics and Recognizability of Vocal Expression of Emotions, Walter de Gruyter, Inc., The Netherlands.
[3] Cristian Bonatto, Jason A. C. Gallas and Yoshisuke Ueda, Chaotic phase similarities and recurrences in a damped-driven Duffing oscillator, Phys. Rev. E 77, (2008).
[4] Aruna Chakroborty, Cognitive Cybernetics: A Study of the Behavioral Models of Human Interactions, Ph.D. Thesis, Jadavpur University, 2005.
[5] Aruna Chakroborty and Amit Konar, Emotional Intelligence: A Cybernetic Approach, Springer.
[6] Dellaert F., Polzin T., Waibel A., Emotion Recognition in Speech, in Proceedings of the Fourth International Conference on Spoken Language (ICSLP 96), Volume 3, 3-6 Oct.
[7] Dolan R. J., Heinze H. J., Hurlemann R., Hinrichs H., Magnetoencephalography (MEG) determined temporal modulation of visual and auditory sensory processing in the context of classical face conditioning, Neuroimage, 2006;32(2).
[8] Ekman P., Friesen W. V., Facial Action Coding System: A Technique for Measuring Facial Movement, Consulting Psychologists Press, Palo Alto, California.
[9] P. Holmes, A nonlinear oscillator with a strange attractor, Philosophical Transactions of the Royal Society A, 292.
[10] P. Holmes and D. Rand, Phase portraits and bifurcations of the nonlinear oscillator, International Journal of Non-linear Mechanics, 15.
[11] Tom Johnstone, Carien M. van Reekum, Terrence R. Oakes, and Richard J. Davidson, The voice of emotion: an fMRI study of neural responses to vocal expressions of anger and happiness, Social Cognitive and Affective Neuroscience, vol. 1, Issue 3, October 20.
[12] Benjamin C. Kuo, Automatic Control Systems, John Wiley & Sons Inc.
[13] H. Nakano, T. Saito, Grouping synchronization in a pulse-coupled network of chaotic spiking oscillators, IEEE Trans. Neural Networks, Vol. 15, No. 5, September.
[14] S. Novak, R. G. Frehlich, Transition to chaos in the Duffing oscillator, Physical Review A, 1982, APS.
[15] K. Ogata, Modern Control Engineering, Fifth Edition, Prentice Hall.
[16] E. Ott, Chaos in Dynamical Systems (2nd edition), Cambridge University Press.
[17] James A. Russell, Jo-Anne Bachorowski and José-Miguel Fernández-Dols, Facial and Vocal Expressions of Emotion, Annual Review of Psychology, vol. 54 (volume publication date February 2003), first published online as a Review in Advance on October 4, 2002.
[18] F. Schneider, W. Grodd, R. E. Gur, U. Klose, A. Alavi, and R. C. Gur, PET and fMRI in the study of emotion, Psychiatry Research: Neuroimaging, Volume 68, Numbers 2-3, February 7, 1997.
[19] Ye Yuan, Yue Li, "Study on EEG Time Series Based on Duffing Equation", BMEI, vol. 2, 2008 International Conference on Biomedical Engineering and Informatics.

A Study of Dynamic Adaptation Techniques

Jorge Fox, Siobhán Clarke
Lero - The Irish Software Engineering Research Center, Distributed Systems Group, School of Computer Science and Statistics, Trinity College Dublin, Ireland
{firstname.lastname}@cs.tcd.ie

Abstract
The increasing complexity of software systems, as well as changing conditions in the operating environment, demand systems that are more flexible and reliable. One possible solution we are considering is the use of mechanisms to effect behavioral improvements or changes to existing systems. This has been called Dynamic Adaptation (DA). It involves exploring a number of challenges; in particular, finding mechanisms for service discovery, implementation of behavior changes at runtime, service interaction, and service behavior modification. This paper presents a survey of approaches to dynamic adaptation in order to assess their capabilities. We describe a framework for comparing dynamic adaptation approaches, evaluate selected DA approaches against this framework, and, based on the comparison, describe current trends in DA technologies.

Keywords: software engineering, dynamic adaptation, software and systems development, runtime systems

1 INTRODUCTION

Dynamic adaptation (DA) is gradually becoming a key element of software engineering for a growing range of domains, such as automotive systems, web services, and networks, among others. Furthermore, within these domains, the requirement to adapt to changing environmental conditions, as well as the need to deploy additional services on heterogeneous platforms, motivates the use of technologies that facilitate a higher degree of adaptation to change. A review of the state of the art on DA reveals open areas of research. Consider, for example, runtime systems operating under time limits. As will be explored later in this paper, DA within time limits and without feature interference is a field of research in which no conclusive results have been achieved. Naturally, there are a number of approaches to adaptation, but most are static. More importantly, the flexibility of adaptation, or the degree to which adaptations can be achieved, is in most cases limited. Furthermore, in many existing DA frameworks, adaptation is achieved through parameterization or reconfiguration, which can lead to solutions of limited flexibility and can constrain future adaptations. The relevance of DA lies in the growing need for flexible and reliable systems in complex environments: environments characterized by the need for ubiquity, distribution, and interoperability, as well as for controlled and predictable adaptation mechanisms.

2 DYNAMIC ADAPTATION

In this section we explore current concepts and definitions related to DA. Adaptability is defined as the ability of software systems to accommodate changes in their environment. As Yan et al. note, a software system is adaptable only insofar as its software architecture is adaptable in the first place [18]. Adaptive systems are those that have the ability to adapt at runtime in reaction to user needs, system intrusions or failures, a changing operating environment, and variability in resources and performance. We consider dynamic adaptive systems to be a subset of adaptive systems with respect to adaptation time: dynamic adaptive systems perform adaptations at runtime rather than at design time.
In this sense, [6] and [9, 10] introduce comprehensive reviews of adaptability and adaptation. In Section 4, we draw inspiration from their classifications to develop our comparison framework. However, the emphasis of our work is on dynamic adaptive systems.

3 COMPARISON OF APPROACHES

To classify research groups in DA, we first analyze the possible lines of research in adaptive systems; that is, the extent to which a system adapts to changes in the environment, whether through structural means (i.e., architectural adaptation), through changes in the parameterization of the system, or through a combination of both. Another set of criteria that we found relates to the degree of anticipation of changes; in other words, the extent to which adaptation reacts to changes in the environment, from totally unforeseen to foreseeable changes. Clearly, the former is difficult to conceive of and even more difficult to implement in its pure form. Second, we classify adaptability according to characteristics we identify as relevant to adaptive systems, such as: degree of anticipation; extent of adaptive changes (i.e., architectural vs. localized); whether adaptation is achieved by composition mechanisms or by parameterization; and whether or not there is tool support. Equally important, some authors (see [6]) consider the relationship between what is called compositional as opposed to parametric adaptation, as well as mixed forms. We consider these as two dimensions of the classification, which can be combined. Our classification criteria are explained in more detail in Section 4. Third, the classification criteria and the approaches we discuss are depicted in Table 1, in which we assign values (ranging from low to medium to high) to the surveyed research teams for each criterion. Value assignment was based on a review of the literature and available information. Furthermore, our classification scheme draws on [6], in particular the distinction between compositional and parametric adaptation, and between anticipated and unanticipated adaptation. Table 1 shows the classification criteria that we propose for dynamic adaptive systems. Some criteria can be combined, while others cannot. For example, achieving adaptation through a high level of parameterization, localized in one or two specific modules, is relatively simple and is present in most approaches. On the other hand, some combinations may not be achievable, such as full anticipation, meaning fully anticipating environmental changes, achieved at runtime. Therefore, at this stage of our research, the results indicate that these criteria are interdependent; identifying the scope and properties of such relationships is work in progress.

4 CLASSIFICATION CONCEPTS FOR DA

We briefly present the classification concepts that we propose to describe current research approaches in DA.

4.1 Unanticipated adaptation
This concept indicates the degree to which adaptation triggers and possible adaptation needs are known in advance. The higher the level of adaptation to unforeseen changes, the higher the framework scores on this criterion. We believe that a higher level of adaptation to unforeseeable changes indicates a more flexible or generic adaptation framework.

4.2 Scope
This concept refers to the extent to which adaptation changes spread throughout the software system. We assign values from low to high as follows: if adaptation is limited to a localized component, the approach gets a low scope value; if adaptation is performed on a small number of components, it is classified as medium; and if adaptation reaches the level of the entire system, it is considered high in adaptation scope.
4.3 Parametric adaptation
This criterion indicates whether adaptation is achieved by adjusting or refining predefined parameters in particular software entities, such as components, services, or methods. Heavy parameterization can indicate a rather inflexible framework, owing to a greater reliance on predefined values and parameters.

4.4 Compositional
This classifier means that the framework under analysis achieves adaptation through the insertion or replacement of functional units. By functional units we mean components, or sets of components or services. A compositional approach is generally based on binding and unbinding mechanisms.

4.5 Tools
We also consider whether the approach provides tools to support dynamic adaptive systems, such as a development environment or a runtime monitoring environment. This is the last classification criterion in Table 1. We believe this criterion is of relatively high importance, given the need to facilitate adoption of the approach or framework.

In the following section, we present the research teams that we consider sufficiently representative to explore our classification criteria. The selection is based on a comprehensive review of the literature and the subsequent selection of teams with relevant publications in the field. We also favored teams working within a consortium of universities and

institutions, or a research group established in academia. The aim of our survey is to explain our classification concepts and identify important features in the field, rather than to give a comprehensive review of DA approaches.

5 ADAPTATION TECHNIQUES

We identify three main groups of adaptation techniques: dynamic binding and unbinding of selected components, the use of generic interceptors, and reconfiguration techniques. These techniques were selected after reviewing the literature and analyzing current adaptation techniques. The following DA technologies represent an overview of various methodologies, methods, and techniques in the field. Naturally, there are other approaches besides the ones we selected; however, we consider this selection a sufficient basis for comparison. In this paper we consider technologies that achieve DA by compositional adaptation, but also technologies that achieve adaptation by reconfiguration or through interceptors. For an exhaustive list of adaptive frameworks, see [6]. We also do not include in this survey approaches that focus on very particular mechanisms, such as loadable hypervisor modules [12], or on particular DA issues, such as interoperability [7].

5.1 Dynamic binding and unbinding of selected components

This technique is used by the extensible service-oriented component framework iPOJO [4]. In iPOJO, handlers inject Plain Old Java Object (POJO) components into the base component. These handlers manage the publication and provision of services, as well as dependencies. When a service meets the given dependency conditions, it is published; otherwise, it is ignored. Components relate to each other by connecting through these dependencies, and a component becomes invalid when a service provider (dependency) disappears. Therefore, creating or activating a component is equivalent to publishing its dependencies, while deactivating a component is achieved by removing dependencies. Generally speaking, iPOJO consists of a component model that injects plain old Java objects (POJOs) at runtime. This is the general mechanism through which systems in this approach adapt. It is achieved mainly through dependency management and service provision, while the business logic resides at the POJO level. DA is then implemented using dependency redirection; this is managed by handlers, which in turn are selected by metadata declared in XML files. A component container handles all service-oriented computing aspects and separates them from the business logic, which remains in the base component. iPOJO provides a component runtime environment that simplifies the development of applications on top of the platform provided by the Open Services Gateway initiative (OSGi). OSGi is a technology intended to facilitate the interoperability of applications and services through a component integration platform [3, 4]. The service concept used in iPOJO is quite abstract, closer to a capability in a broad sense. This approach makes the implementation dependent on the underlying service runtime framework, which makes the work moderate in scope for porting. iPOJO provides a high level of compositionality as well as dynamism with respect to injecting, binding, and rebinding components or POJOs, while the scope of adaptation is determined by the underlying framework and its availability, which limits integration with services or components that do not run on OSGi.
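The sketch below illustrates the general dependency-driven bind/unbind idea in Python; it is a language-neutral illustration of the mechanism, not iPOJO's actual API (iPOJO itself is Java/OSGi, and all names here are invented).

    class Registry:
        """Toy service registry: components become valid only while
        all of their declared dependencies are provided."""
        def __init__(self):
            self.providers = {}      # service name -> provider object
            self.components = []     # registered components

        def publish(self, name, provider):
            self.providers[name] = provider
            self._revalidate()

        def withdraw(self, name):
            self.providers.pop(name, None)
            self._revalidate()       # dependents become invalid automatically

        def register(self, component):
            self.components.append(component)
            self._revalidate()

        def _revalidate(self):
            for c in self.components:
                c.valid = all(dep in self.providers for dep in c.requires)

    class Component:
        def __init__(self, name, requires=()):
            self.name, self.requires, self.valid = name, tuple(requires), False

    # Usage: a component is activated by publishing its dependencies
    reg = Registry()
    player = Component("media-player", requires=["codec", "display"])
    reg.register(player)
    reg.publish("codec", object()); reg.publish("display", object())
    assert player.valid                 # bound: all dependencies present
    reg.withdraw("codec")
    assert not player.valid             # unbound when a provider disappears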
Another technique is represented by PCOM. PCOM is a distributed application model that supports DA through signaling mechanisms and adaptation strategies; see [2]. In PCOM, components are entities that interact with each other to fulfill their dependencies. This definition of components resembles that of services, except that services are more explicitly intended to cooperate, where necessary, to accomplish their functionality. Applications in PCOM are described by a tree of components and their dependencies, with the root component being a kind of main program or application identifier. However, it is not clear from [2] whether dependencies occur only along the branches of the tree or whether other relationships are allowed, nor to what extent these dependencies are transitive. Moreover, the authors acknowledge that arbitrary graphs would cause complications. This can be seen as a limitation of the framework. For the reasons mentioned above, we consider PCOM more parametric than compositional. Since an adaptation strategy must be established in advance, a medium level of unanticipated adaptation is achieved. The framework is not as dynamic as ACT (see Section 5.3), but it does claim to support runtime adaptation, so we consider it highly dynamic as well. Another group of techniques, closely related to the dynamic binding and unbinding of components, proposes the use of composition frameworks, filters, paths, and injectors. In this category we find a technique that introduces the use of injectors. Injectors in the Object Infrastructure Framework (OIF) offer a way to ease the evolution and creation of distributed systems. The main mechanism is to inject behavior into the communication path between components [5]. Behaviors can be injected on the client or on the server, and instances and methods can each have a different sequence of injectors. The stubs can be changed during execution, promoting dynamic behavior of the system. There is a high-level specification language and a compiler to support

OIF adaptation.

Table 1: Evaluation of the selected research approaches (ACT, DAiSI, Dynamic TAO (CORBA), iPOJO, MADAM, MBD DA, and PCOM) against the classification criteria: unanticipated adaptation, scope, parametric, compositional, and tools, each rated low, medium, or high.

OIF injectors work with Common Object Request Broker Architecture (CORBA) stubs, with some modifications to the skeletons to obtain the injector sequence for each method. An injector can modify the target, the operation arguments, annotations, and the return value. It can also invoke other remote calls. Injections can perform actions before and after the server action, which makes it possible to modify the flow of control. In OIF, components are black-box objects. Injectors are created by two classes: the injector itself and a factory that instantiates the injector. Injector instances are created by calling the factory when CORBA proxies are built. The injectors are then inserted into the methods using an aspect-oriented programming language. Client-side injectors can change the destination of a request. There are different types of injectors: rebind, impatient, insecure, mediator, and balancer. These differ in the decision criteria used to select the destination service. To determine the destination of a redirection, an injector can rely on a delegate that has information about the alternatives offered by the destination services. Delegates can be organized dynamically in case new services are discovered and, to optimize this mechanism, can group together in a community of delegates that share information.

5.2 Dynamic adaptation with aspect orientation

Dynamic adaptation with aspect orientation (AO) in Yang et al. [19] is performed in two phases. In the first phase, adaptation points are defined; in the second, the adaptation infrastructure is related to the base program. As Yang et al. describe, the adaptation infrastructure consists of an adaptation manager and a rule base, and dynamic adaptation is directed by a set of rules. The adaptation kernel is a loose grouping of adaptation managers that are explicitly called to check execution conditions and perform adaptations accordingly. At runtime, an instance of the adapt-ready program is created. The behavior adapters in the running program use a filter chain to trap messages and invoke the respective adaptation manager, which determines which rules are met and which corresponding adaptation should be performed.
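The rule-driven scheme can be pictured with a small sketch: a rule base maps observed conditions to adaptation actions, and a manager evaluates the rules whenever an adaptation point is reached. This is only an illustration of the idea, with invented names, not the infrastructure of Yang et al.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Rule:
        condition: Callable[[dict], bool]   # predicate over the execution context
        action: Callable[[], None]          # adaptation to perform when it holds

    class AdaptationManager:
        def __init__(self, rules):
            self.rules = list(rules)

        def check(self, context):
            """Called from an adaptation point: fire every rule whose condition holds."""
            for rule in self.rules:
                if rule.condition(context):
                    rule.action()

    manager = AdaptationManager([
        Rule(lambda c: c["bandwidth_kbps"] < 128,
             lambda: print("switching to low-resolution codec")),
        Rule(lambda c: c["battery_pct"] < 10,
             lambda: print("disabling background sync")),
    ])

    # An adapt-ready program consults the manager at its adaptation points:
    manager.check({"bandwidth_kbps": 96, "battery_pct": 54})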
5.3 Generic interceptors

The use of generic interceptors appears in approaches such as Adaptive CORBA [16]. These techniques do not modify the behavior of a component; instead, they intercept messages between components to provide additional behavior that performs the adaptation. For example, in the Adaptive CORBA Template (ACT), generic interceptors register with the Object Request Broker (ORB) of a CORBA application at startup. The interceptors tailor the requests, responses, and exceptions that pass through the ORB; the behavior of the component itself is therefore not modified. These interceptors have to be pre-registered, which restricts the flexibility of adaptation; see [9]. ACT is a language-independent template that can be used to develop an object-oriented framework as well as to enhance CORBA applications [16]. It introduces generic interceptors, which are specialized request interceptors registered with the ORB at startup. Interceptors are either static or dynamic: dynamic interceptors can be registered or unregistered at runtime, while static interceptors cannot be unregistered from the ORB at runtime. The approach also relies on the notion of a factory to attach dynamic interceptors at runtime. The concept of generic interceptors provides some basis for unplanned adaptation, since these interceptors are registered without specific behavior and can then be enhanced at runtime to implement required functionality. For these reasons, we consider this work to achieve a high level of unanticipated adaptation, while achieving only a medium adaptation scope, since only the dynamic interceptors are changed. It also has a medium level of parameterization, since proxies and redirection are needed, and it is highly compositional.
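As an illustration of message interception without modifying the component, the following sketch wraps calls to a target object in a chain of interceptors that may observe or rewrite requests and responses. This is a generic rendering of the pattern, not ACT's CORBA interfaces, and all names are invented.

    class Interceptor:
        def before(self, method, args):     # may rewrite the request
            return method, args
        def after(self, method, result):    # may rewrite the response
            return result

    class LoggingInterceptor(Interceptor):
        def before(self, method, args):
            print(f"request: {method}{args}")
            return method, args

    class InterceptedProxy:
        """Stands between client and component, like interceptors at the ORB."""
        def __init__(self, target):
            self.target, self.chain = target, []   # chain can change at runtime

        def register(self, interceptor):           # 'dynamic' registration
            self.chain.append(interceptor)

        def unregister(self, interceptor):
            self.chain.remove(interceptor)

        def call(self, method, *args):
            for i in self.chain:
                method, args = i.before(method, args)
            result = getattr(self.target, method)(*args)
            for i in reversed(self.chain):
                result = i.after(method, result)
            return result

    class Account:
        def __init__(self):
            self.balance = 0
        def deposit(self, amount):
            self.balance += amount
            return self.balance

    proxy = InterceptedProxy(Account())
    proxy.register(LoggingInterceptor())
    print(proxy.call("deposit", 10))   # logged by the interceptor, then executed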

5.4 Reconfiguration techniques

These techniques aim to adjust internal or global parameters in response to changes in the environment; reconfiguration can help rearrange the elements of a system. Aksit and Choukair [1] identify two main research approaches to reconfiguration: adding configuration elements, and the use of configuration languages and components. Dynamic reconfiguration [13, 14] aims to achieve adaptation at the level of component service usage, component service implementation, and configuration. The first type of adaptation supports changing components at runtime, for example selecting services based on some quality property. The second admits altering the behavior of a component and the realization of the service it provides. Finally, the third type of adaptation reconfigures components in a non-localized way; its objective is to modify how the components are related and how the services offered are activated or stopped. For more information see [8]. This work builds on a component model for DA and rests on a formal foundation [15]. A related framework is the Dynamic Adaptive System Infrastructure (DAiSI) [8]. This framework introduces a dynamic adaptive component model that defines how a component should be structured for DA. Our review indicates that, in its current state, DAiSI achieves adaptation through parameterization and composition mechanisms. Anticipating changes seems to be an open topic in this framework: there is no explicit mechanism to deal with changes, so it may not react to unanticipated changes in the environment, but only to those indicated by its configuration-manager components (browser). There is a good level of tool support. Another framework is DynamicTAO, an extension of The ACE ORB (TAO). TAO is a standard CORBA Object Request Broker (ORB); see [17]. The most prominent feature of DynamicTAO is the ability to reconfigure the ORB at runtime by dynamically binding and unbinding certain components [11, 17]. It allows remote reconfiguration and replacement of given ORB components without the need to restart the entire ORB, which is a useful feature for DA. It also provides the means to upload code with new implementations, which is likewise essential for DA. Given its reconfiguration and replacement capabilities, we find it very dynamic. We also consider the scope of adaptation, i.e. the extent to which the system adapts as a proportion of entities with DA capabilities, to be high in DynamicTAO, given that the underlying ORB framework allows, at least in principle, any of the constituent components to be adaptable. Another framework is the Mobility and ADaptation enabling Middleware (MADAM). This framework provides a component model with plug-ins for adaptation [6]. With this framework, possible variations of a system are achieved by recursively applying predefined realization plans. Realization plans are concrete composition plans, or predefined combinations of components, provided by the designer. The component model includes an adaptation manager; a composition or adaptation manager is a common mechanism in most adaptive frameworks. In addition, MADAM provides a middleware framework for runtime adaptation with context management, adaptation management, and configuration management. Its composition is based on parametric adjustments, and since adaptations are predefined in a realization plan by a designer, unanticipated adaptation is not possible.

5.5 Model-Based Development of Dynamically Adaptive Software (MBD DA)

Zhang and Cheng ([20, 21]) have worked on reliability aspects of DA. The authors introduce an approach for building formal models of the behavior of adaptive programs.
In this way, they provide a means to ensure that such adaptations are safe with respect to system consistency. The approach is based on state-machine representations of adaptive programs. The properties that the program must satisfy throughout its execution are called global invariants; adaptations are defined as adaptation sets whose behavior is represented as simple adaptive programs, and the properties of each adaptive program are local. Their method takes into account dependency analyses of the target components, specifically determining viable sequences of adaptive actions and the states in which an adaptive action can be safely applied. This technique supports safe adaptation. MBD DA allows the insertion, removal, and replacement of components in response to changing external conditions. The work is explored on the example of a wireless multicast video application. In addition, a safe DA process has been developed in a related project [20]. Its formal framework based on state machines covers static and dynamic analysis, and it is also capable of dealing with runtime systems. The approach can be combined with different tool sets (see [20]), offering a medium level of tool support; we found no strong evidence of a robust toolset. This work focuses more on providing a formal framework for analyzing adaptive programs than on the mechanisms or frameworks that support adaptation itself.
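The notion of adapting only in states where it is safe can be pictured with a small sketch: a program modeled as a state machine accepts an adaptation request but defers it until it reaches a state marked as safe. This is an illustration of the general idea under invented names, not Zhang and Cheng's formalism.

    class AdaptiveProgram:
        """Toy state machine that applies a pending adaptation only in safe states."""
        def __init__(self, transitions, safe_states, state):
            self.transitions = transitions   # state -> {event: next_state}
            self.safe_states = safe_states   # states where adaptation may occur
            self.state = state
            self.pending = None              # queued adaptation action

        def request_adaptation(self, action):
            self.pending = action            # deferred, not applied immediately

        def step(self, event):
            self.state = self.transitions[self.state][event]
            if self.pending and self.state in self.safe_states:
                self.pending(self)           # apply only in a quiescent state
                self.pending = None

    def swap_codec(program):
        print(f"adapting in state {program.state!r}: new codec installed")

    # Streaming loop: 'idle' is the only quiescent (safe) state
    prog = AdaptiveProgram(
        transitions={"idle": {"play": "streaming"}, "streaming": {"stop": "idle"}},
        safe_states={"idle"},
        state="streaming",
    )
    prog.request_adaptation(swap_codec)
    prog.step("stop")    # reaches 'idle', so the adaptation fires here
    prog.step("play")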

6 CONCLUSIONS

After reviewing a number of DA frameworks and approaches, we highlight the salient features of DA systems, particularly the scope of changes, or what we call the scope of adaptations: whether they are made to the underlying framework or to a limited number of pre-enabled DA components. Likewise, the level of anticipation of changes is an important attribute, since it determines the capacity of a system to face new services or changes in the environment. In addition, particular adaptation approaches may vary depending on the underlying foundation: components, services, or a combination of both. Another aspect is that the adaptation mechanisms themselves are sometimes left to the decision of the designers and are specified as parameters on which the system reconfigures or implements the adaptations. In this work, we identify the need for more research on DA mechanisms that allow greater compositionality and flexibility. Some features that we recognize as significant for DA systems are the breadth of adaptation, the mechanisms used to achieve adaptation, and the underlying framework or tools available. Another issue that influences DA is the discovery and replacement of services at runtime, and the decision-making process behind the adaptation. Finally, there is a need for a framework that enables the discovery and replacement of services at runtime, with a runtime environment capable of verifying the reliability of changes and preserving the runtime bounds of the software system. Specific attributes play a fundamental role in DA: they guarantee the reliability of adaptations, preserve the runtime behavior of the software system after adaptation, and keep the adaptation time within predefined limits. In this sense, our survey reveals open areas of research.

ACKNOWLEDGMENTS

This work was supported in part by grant 03/CE2/I303 1 from Science Foundation Ireland to Lero, the Irish Software Engineering Research Center.

REFERENCES
[1] Mehmet Aksit and Zièd Choukair. Overview and prospective view of dynamic, adaptive and reconfigurable systems. In ICDCS Workshops, page 84. IEEE Computer Society.
[2] C. Becker, M. Handte, G. Schiele, and K. Rothermel. PCOM - a component system for pervasive computing. In Proceedings of the Second IEEE Annual Conference on Pervasive Computing and Communications (PerCom), pages 67-76, March.
[3] Clément Escoffier and Richard S. Hall. Dynamically adaptive applications with iPOJO service components. In Markus Lumpe and Wim Vanderperren, editors, Software Composition, Volume 4829 of Lecture Notes in Computer Science. Springer.
[4] Clément Escoffier, Richard S. Hall, and Philippe Lalanda. iPOJO: An extensible service-oriented component framework. In IEEE SCC. IEEE Computer Society.
[5] Robert E. Filman and Diana D. Lee. Redirection by injector. Distributed Computing Systems Workshops, International Conference On, 0:0141.
[6] Kurt Geihs. Self-adaptive software. Informatik-Spektrum.
[7] Robert Hirschfeld and Katsuya Kawamura. Dynamic service adaptation. Software - Practice and Experience, 36(11-12), September/October.
[8] Holger Klus, Dirk Niebuhr and Andreas Rausch. A component model for dynamic adaptive systems. In Alexander L. Wolf, editor, Proceedings of the International Workshop on Engineering of Software Services for Pervasive Environments (ESSPE 2007), pages 21-28, Dubrovnik, Croatia, September. ACM.
[9] Philip K. McKinley, Seyed Masoud Sadjadi, Eric P. Kasten, and Betty H. C. Cheng. A taxonomy of compositional adaptation. Technical Report MSU-CSE-04-17, Department of Computer Science and Engineering, Michigan State University.
[10] Philip K. McKinley, Seyed Masoud Sadjadi, Eric P. Kasten, and Betty H. C. Cheng. Composing adaptive software. Computer, 37(7):56-64.
[11] Philip K. McKinley, R. E. Kurt Stirewalt, Betty H. C. Cheng, Laura K. Dillon, and Sandeep Kulkarni. RAPIDware: component-based development of adaptive and dependable middleware. Technical Report, Michigan State University.
[12] Thomas Naughton, Geoffroy Vallee, and Stephen L. Scott. Dynamic adaptation using Xen: thoughts and ideas on loadable hypervisor modules. First Workshop on System-Level Virtualization for High Performance Computing (HPCVirt 2007), March.
[13] Marie-Claude Pellegrini and Michel Riveill. Component management in a dynamic architecture. J. Supercomput., 24(2).
[14] James M. Purtilo. The Polylith software bus. ACM Trans. Program. Lang. Syst., 16(1).
[15] Andreas Rausch. DisCComp - a formal model for distributed concurrent components. Electronic Notes in Theoretical Computer Science (ENTCS), 176(2):5-23.
[16] S. M. Sadjadi and P. K. McKinley. ACT: An adaptive CORBA template to support unanticipated adaptation. ICDCS, 00:74-83.
[17] D. C. Schmidt, B. Natarajan, A. Gokhale, N. Wang, and C. Gill. TAO: A pattern-oriented object request broker for distributed real-time and embedded systems. IEEE Distributed Systems Online.
[18] Narayanan Subramanian. Generation of adaptive software architecture using the NFR approach. PhD Thesis, University of Texas at Dallas. Supervisor: Lawrence Chung.
[19] Z. Yang, B. H. C. Cheng, R. E. K. Stirewalt, J. Sowell, S. M. Sadjadi, and P. K. McKinley. An aspect-oriented approach to dynamic adaptation. In WOSS '02: Proceedings of the First Workshop on Self-Healing Systems, pages 85-92, New York, NY, USA. ACM.
[20] Ji Zhang and Betty H. C. Cheng. Model-based development of dynamically adaptive software. In ICSE '06: Proceedings of the 28th International Conference on Software Engineering, New York, NY, USA. ACM.
[21] Ji Zhang, Betty H. C. Cheng, Zhenxiao Yang, and Philip K. McKinley. Enabling safe dynamic component-based software adaptation. Architecting Dependable Systems III, 3549/2005.

Architecture of the Information Resources of the Brazilian Social Security: approaches from Social Sciences and Informatics working together

Claudio Jose Silva RIBEIRO
DATAPREV, Information Technology of the Brazilian Social Security, Rua Prof. Alvaro Rodrigues, 460/303, Botafogo, Rio de Janeiro, RJ, CEP , Brazil

ABSTRACT
The growing use of information in virtual environments has expanded the problem of Information Management, as it has driven the discussion of its solution towards technology. In this sense, this article presents part of an investigation that proposes a readjustment of focus in this debate, making use of epistemology while studying the process of planning, understanding and representing information resources, with particular emphasis on structuring according to an Information Architecture. The theoretical framework started from the assumptions of Information Administration and Information Architecture, as well as from Project Management Methodology and Information Systems Development Methodology. The understanding of information needs was developed using Domain Analysis to delimit the domain and to understand and judge the relevance of information within the field under review. In addition, it was complemented with user-oriented research through the Sense-Making approach, originating in the Social Sciences, and Requirements Elicitation, originating in Software Engineering. In dialogue with Information Sciences and Computer Sciences, this article presents the basic definitions that have shed light on the path taken to develop and structure the Dataprev project on the Content Management System for the Brazilian Social Security.

Keywords: Information Management, Domain Analysis, Requirements Elicitation, Sense-Making, Information Architecture, Social Security, Brazil

1. INTRODUCTION

The way in which society uses and treats information has been changing over the last three decades as a consequence of the emergence of new economic and technological models. These models have promoted a paradigm shift as important as the invention of the printing press or even the industrial revolution itself. We notice that information has evolved from a simple and mundane fact, with limited impact, into something that is increasingly converted into knowledge, and knowledge into power. The term power must be understood in its broadest sense, such as using knowledge to obtain economic benefits. Since technology has accelerated the dissemination and exchange of information, the discussion about the treatment of information has catalyzed another process of transformation. The extensive use of information spaces on the Internet (websites), where data and information are available 24 hours a day, 7 days a week, has further fertilized the process of change already underway. In accordance with a historical vision of information processing, we come to the topic of Information Management (IM), which, in its genesis, sought to deal with the collection and use of human, technological, financial, material and physical resources for the management of information. These elements must be observed at both the strategic and operational levels, in order to make information a useful resource and part of the business strategies of individuals, groups and institutions.
The information inventory, the costs involved in the management and use of information, and the identification of existing information gaps were all aspects considered within the scope of studies on IM, and they were enhanced with the incorporation of new technologies [1]. Levitan, one of the forerunners of the subject, observes: "Information management can be seen as the cornerstone of information science and technology. All activities of the eclectic profession, from user requirements, systems design and evaluation to document and knowledge representation, database organization, storage and retrieval techniques, applications of hard and soft technologies, as well as repackaging, dissemination and commercialization, are related to the complex process of information management." [2] In this direction, we note that the efforts to collect, process and retrieve data, information and documents have also motivated the development of projects in the field of the Brazilian Social Security System. Beyond the media in which data, information and documents materialize, the development of these works is essential, considering that the Brazilian Social Security System acts as a depository of information for Brazilian citizens. The figures involved in this project point to a large volume of data, with a variety of types originating from many different sources. Currently, there is a monthly payment of approximately 24 million Benefits, with different Categories of Benefits, each with different characteristics demanding a differentiated treatment. This information is stored in very large databases and must be available to Brazilian citizens. In the universe of documents and unstructured information, studies point to a volume of around 32,000 document movements per day, between administration processes, Benefit processes and other tasks. Considering an average of 15 pages per process, we find a total volume equivalent to 450,000 pages per day [1]. This business scenario, rich and with many nuances, led us to delimit the context of the research, focusing on the use of documents and leading the project to establish some definitions that were useful for the development of the work. We must remember that documents are media whose historical objective is to enable communication and facilitate the storage of knowledge. Therefore, these definitions led us to understand the activities that belong to the communication process, since different documents can carry

several meanings, allowing many interpretations depending on the context in which they are inserted. The concept of Information can have its meaning associated with the document itself or even with the knowledge related to its content, both participating in a specific domain [3]. In this direction, to overcome the challenge posed by the Social Security information environment, we sought support in the process of assembling an Information Architecture. Here, the method for developing research related to the understanding and representation of information needs and the management of such resources must be conducted according to the following phases [4]: strategic planning; creation; understanding and mapping; capture and collection; selection and treatment. Another approach that guided the study was derived from Choo [5], as it presents a view of information management that supports the steps listed above. This author also suggests that the activity be viewed as a network of processes that acquire, create, organize, distribute and use information, in accordance with the Information Management Process Model (Figure 1).

Figure 1: Information Management Process Model [5] (information needs, information acquisition, storage and structuring, development of information products, use of information, and adaptive behavior)

According to a historical vision, the definitions of projects were always more related to the Engineering areas, with their many projects for the construction of buildings, bridges, airplanes, automobiles, boats and electronic equipment. Today, however, project development has created new ties and become part of other areas of knowledge, contributing to the organization of new ventures in various fields of study. This vision of projects was also used in structuring the research [1]. Therefore, we assume that a project is a set of activities or measures planned to be executed, with a defined execution responsibility, to achieve certain objectives within a defined scope, with a limited time frame and specific resources. In a complementary way, we sought support in the Information Systems development processes brought by Computer Science, where the basic project cycle is organized into work phases. This cycle may be developed in the following phases [6]: Conception; Elaboration; Construction; Transition. This theoretical reference allowed us to structure the work according to a set of phases and intermediate products, making it possible to start our interdisciplinary path and seek to overcome the barriers to structuring the information resources of the Brazilian Social Security System.

2. UNDERSTANDING THE USER'S OBJECTS OF INTEREST

In accordance with the interdisciplinary vision proposed for the project, we begin our path through some points of support identified as essential to face the challenge of understanding the information needs of users. They are: Domain Analysis, Sense-Making and Requirements Elicitation. In the field of Information Science (IS), domain analysis leads to the delimitation and understanding of the information set of a given context, through the understanding of communication patterns and relevance. Some topics studied in the IS field, such as the use of strategies for subject classification, controlled vocabularies and user studies, can be understood as kinds of Domain Analysis [7].
On the other hand, when working with domains, especially in the field of document analysis, it is necessary to highlight the activity of subject analysis of these documents. According to a domain-analytic approach, a document can serve different groups of users, who can use it for different purposes. Therefore, the topics should not be identified from individual points of view, nor from general or universalized points of view, but should reflect the interests of the groups that use the information systems analyzed in the domain under study [8]. To help delimit the domain, it was necessary to rely on another field essential to the development of Domain Analysis: the concept of relevance of information, since it is necessary to clearly identify the limits of the context of the topic under analysis [9]. In this direction, relevance comes to have a preponderant role in the processes of acquisition, organization, storage, preservation, communication, interaction and use of information, mainly when these activities are executed and supported by Automated Information Retrieval Systems. According to a still preliminary vision, these systems were developed to respond with information that is potentially relevant to people. Thus, it is possible to discern two interacting worlds: the IT world, where relevance is a systemic category; and the world of the Social Sciences, where relevance is a category of individual perception [10]. Another point that helped us in delimiting the domain was the possibility of using models to measure relevance [11] [12], which made it possible to turn the process of identifying the components of the domain under analysis into a prescriptive one. In the context of Computer Science, however, Domain Analysis is a method used in systems development and software engineering whose main objective is to assist in the reuse of information system components. This work is developed by the Domain Analyst, who tries to identify, capture, organize and represent all the relevant information of a domain. This relevant information will be used to develop systems, with the aim of making it reusable when creating new applications [12]. Prieto-Díaz explains that various types of information are generated during the system development process, from the requirements analysis phase to the generation of source code. This information is added to the domain

knowledge, together with the new requirements for the current system and the requirements identified for future systems. Domain specialists and analysts identify relevant information and synthesize it. With the support of a domain engineer, the knowledge is organized and grouped in the form of domain models, patterns and collections of reusable components. One of the objectives of Domain Analysis is to make all this information available for reuse [12]. The vision of creating instruments to enrich the Domain Analysis approach passes through the analysis, understanding, formulation and externalization of a situation. This vision led us to incorporate the approach presented by Dervin into our work proposal. In this sense, the interpretation and understanding of the external world must be obtained through observation, since this observation will lead us to the internal cognitive senses related to actions and attitudes. The use of the guidelines proposed by Dervin was an instrument that allowed us, from a more pragmatic standpoint, to build the logic of a series of daily situations that are frequently in a process of change. These guidelines are listed below [13]: Be aware of differences of opinion: find ways to think about diversity, complexity and integrity, while taking care not to oversimplify the domain under review or plunge it into a Tower of Babel. Use metaphors to analyze situations: travel metaphors through periods of time and space, with past experiences contributing to the future, filling in differences and promoting alternative paths to overcome them, and finally trying to estimate the results obtained. Try to design systems to serve users: think, ask and talk to users. Conduct interviews to understand the identified dislocations, using metaphors to conduct such interviews with users. Analyze spatio-temporal periods, shifting individual attention and identifying the person-situation binomial; this relationship is also called a sense-making instance. Try to apply the results and generalizations of the project to instances, not individuals. The third point of support proposed for this stage led us to some authors with important works on the topic of Requirements Elicitation. In Goguen's view, most of the information desired by requirements analysts is present and available in the social context of users and administrators, and must be extracted through interviews and questionnaires. The systems, functionalities, entities and associations must work in synergy, cooperating to meet the goals established in this context [14]. Kotonya and Sommerville present a number of techniques for carrying out requirements research work. Similarly to Goguen, Kotonya and Sommerville also point to interviews and documentary analysis, but note that these interviews may need to be supplemented by other investigative approaches. For this, the authors present, among several other techniques, the construction of scenarios that simulate the interactions between users and systems, the use of prototypes to support the experimentation of such scenarios, and an observation process with a social analysis of the context (called ethnography) [15]. Still in the field of requirements investigation techniques, Robertson and Robertson observe that it is possible to build scenarios with the use of documentary analysis (a technique they originally call "document archaeology"). These authors also present a technique based on recording requirements on snow cards or white cards,
which are delivered to the survey meeting participants. At the end of these meetings, the cards are collected and pooled for analysis, with subsequent consolidation and recording of results [16]. Although the perspectives presented on understanding information needs do not exhaust the subject, they contributed to the formulation of the project and helped in understanding the world under study, delimiting the domain under analysis and allowing the use of prescriptive guidelines that made this work more objective. With the context delimited and the scope of the project established, we proceeded to structure the information content. In this direction, the use of Information Architecture provided the basis of our project and illuminated the path to follow, since it presents 3 basic levels for the representation of the processes linked to the information cycle, as shown below (Figure 2).

Figure 2: Information Architecture model proposal (adapted from [4])

3. DEVELOPMENT OF INFORMATION ARCHITECTURE IN A DOMAIN

One of the precursors of the development of the topic of Information Architecture was Richard Saul Wurman. He sought to understand how information was collected, organized and presented in ways that were meaningful to planners, architects and engineers. These professionals are in charge of processing information for use in projects in urban environments and for planning transportation routes. With the evolution and development of the subject, Information Architecture had its operational area expanded and also began to be applied

to serve users, enabling them to make use of this organization of information. We can understand that the concept of Information Architecture brings together two very broad terms: Architecture and Information. The first is directed to Architectural themes, where, since ancient times, a field of knowledge has studied, designed and organized spaces according to the requirements of their users, always seeking to work with appropriate measurements and to place objects in their due space with elegance and harmony. This includes a perspective that is both geometric and spatial, symmetry in relation to the whole and, finally, economy in its elements and materials [17]. The second is directed to the definition of Information, which in itself gives rise to a specific study, since information is a complex object that deserves attention in any branch of knowledge. For the development of the project, information is understood as an input for the generation of knowledge, as the link between thought and attitudes, and, finally, as a set of data endowed with relevance and purpose [9]. According to a historical view, the topic of Information Architecture developed from a concept advocated by Zachman [18] and Sowa and Zachman [19], who mainly observed the use of different views in the conception of the architecture of the same product. Evernden [20] extended Zachman's concepts to the information environment, bringing in different organizational, business and technological perspectives, and created a general panorama for generating the Information Architecture. This map is made up of 8 factors (categories, understanding, presentation, evolution, knowledge, responsibility, process and metainformation), which are built from a diagnosis of the use of information in organizations [21]. To understand the organization of these factors, and supported by Evernden and Evernden [21], we proposed the following set of recommendations:
For categories: try to classify and group items by similarity; use differentiation to categorize domain elements; break big problems into smaller parcels; organize and structure the information.
For understanding and comprehension: structure information and data based on existing understanding; try to interpret and use the information in innovative ways; try to discover new meanings, patterns and trends.
For presentation: try to improve the use and understanding of the information; try to communicate ideas and messages; try to persuade the participants by encouraging them to present information scenarios with impact and passion.
For evolution: keep everything relevant updated; prioritize and control changes; recognize both changes and new ideas.
For knowledge: codify personal knowledge as a corporate information resource; apply personal experience and profiles to the information; learn through feedback and practice.
For responsibility: give an account of changes and identify those responsible for them; reconcile differences of opinion; coordinate the efforts developed in the organization.
For process: improve the efficiency, effectiveness and productive use of information; improve the value and reuse of information; maximize information feedback.
For metainformation: promote a language and grammar for information management; develop guidelines and patterns to improve the use of information; promote the use of indexes for the use of information.
Continuing with the structuring according to an Information Architecture, but now bringing in the influence of technological aspects, we observe in Rosenfeld and Morville [22] a direction towards the structuring and evaluation of the Web environment. Looking at aspects of usability and information retrieval, these authors also describe some guiding principles for the development of content projects on websites. The scheme of an Information Architecture necessarily involves the organization of its elements (context, content and users), all of which are related to the domain under review. They help in the definition of broad categories that can be used in the preparation and framing of the Architecture, validating the propositions gathered in Evernden and Evernden. In addition, the guidelines indicated in the work of Hourican [23] characterize Information Architecture as the result of a work process focused on aspects of use, search and retrieval. Information Architecture thus develops towards the understanding of Structure, People, Processes and Tools. This author also presents a large group of tools that can help in the implementation of an Information Architecture [23]:
Content Management Systems: systems that facilitate the capture of content and the management and publication of different elements and applications, mainly in the Web environment;
Document Management Systems: systems that support document management within companies, which can include document versioning and electronic filing, as well as control of document storage and dissemination;
CRM Systems: systems to manage the information of the people who interact with the company (customers, suppliers, partners, etc.);
Search engines: to retrieve and search for information, based on mathematical algorithms and categorization schemes, with accounting of page accesses;
Portals: bringing together different content in a unique and personalized interface;

Electronic commerce and electronic business: facilitating the conduct of business and commerce via the Internet;
Systems of management of these elements, including search, classification, retrieval, analysis and statistics of use.
The contributions collected in Haverty [24] indicate that the elaboration of an Architecture starts from high-level goals and objectives; however, the detailing of this work must be carried out in an inductive manner and arises from the understanding of the information needs of the user. Haverty also observes that the Information Architecture needs to incorporate representations through diagrams, as a way of facilitating the understanding of the problem. Hence, equipped with an interdisciplinary perspective and with a theoretical framework, we began the task of creating the common thread that would guide us on this path. Our biggest challenge was therefore to define exactly which domain elements were necessary to design an enterprise's Information Architecture, since users will have to interact directly with many of these components.
4. RESULTS
As previously observed, with the justifications provided by these approaches, the project began to be designed and structured. The assembly of these views allowed the union of the stages and the specific objectives established in each stage [1]. We must point out that the Logical Design stage is still ongoing and, therefore, its actions are identified as under development. Elaboration of a preliminary model with a graphical representation of the elements of the domain, as well as of their associations. Complementation of the understanding of the domain elements using user-oriented approaches.
Stage: Logical Design of the Information Architecture
Objective: to assist in the structuring and organization of content representation, navigation and meta-information, to enable effective use and retrieval of content.
Actions in development: organization and classification of resources and contents; identification of the person responsible for the contents; identification of retrieval and search engine needs; use of metadata to aid representation; representation of navigation and content; use of an Information Architecture view.
4.1 THE INTERMEDIARY PRODUCTS GENERATED
The use of the Information Architecture led us to a scheme of categories and relationships, which allowed us to develop a project for the storage of the Information Resources of the Center for Procedural Investigation and the Social Information Registry.
Stage: Project Planning
Objective: to help in the planning and investigation of the reason for the need for this project, seeking a clear definition of the motivation and of the scope of work.
Developed actions: delimitation of the context, contemplating the Mission, Goals, Objectives, Sponsorships, Policies, Cultures and Technologies; establishment of the responsibilities associated with the project; establishment of a communication plan; identification of both the domain to be analyzed and the users; definition of the approach to investigate the domain; survey and analysis of previously developed work; identification of information assets aligned with the organization's business objectives.
Figure 3: Information Architecture of the Center for Procedural Investigation
Stage: Specification of Information Resources
Objective: to help in the definition of the work through the understanding of the resources that are generated, delimiting the domain and specifying in detail the elements that will make up this context.
Developed actions: survey and detailing of the content of the domain; use of the relevance of the information to help in the identification of the elements of the domain; definition of the criteria that will help users in the measurement of relevance.
Figure 4: Information Architecture of the Social Information Registry
5. FINAL CONSIDERATIONS
As presented in the introduction to this report, the rise of the Web environment has contributed to an increase in the number of Information Management problems, which caused a shift in the approach to problem resolution towards technology-based

approaches. Especially in environments with a large volume of information, retrieval solutions began to be structured around supposedly magical search mechanisms that promised to bring back all the relevant information. From the implementation of this project, and with the support of the theoretical contribution presented here, we expect to reduce the uncertainties of Information Management, contributing to the Content Management process in the Web environment, especially for the information of the Brazilian Social Security system.
6. REFERENCES
[1] RIBEIRO, C. J. S. Guidelines for the information portal project: an interdisciplinary proposal based on Domain Analysis and Information Architecture. UFF, Rio de Janeiro, 2008.
[2] LEVITAN, K. B. Information resource management - IRM. Annual Review of Information Science and Technology (ARIST), Vol. 17, 1982.
[3] HJØRLAND, B. Theory and metatheory of information science: a new interpretation. Journal of Documentation, Vol. 54, No. 5, 1998.
[4] LIMA-MARQUES, M.; MACEDO, F. L. O. Information Architecture: basis for knowledge management. In: TARAPANOFF, K. (Ed.). Intelligence, Information and Knowledge. Brasília: IBICT/UNESCO, 2006.
[5] CHOO, C. W. The organization of knowledge: how organizations use information to create meaning, build knowledge and make decisions. 2nd ed. São Paulo: SENAC, 2006.
[6] JACOBSON, I.; BOOCH, G.; RUMBAUGH, J. The unified software development process. Addison-Wesley Longman Publishing Co., Inc.
[7] HJØRLAND, B.; ALBRECHTSEN, H. Toward a new horizon in information science: domain analysis. Journal of the American Society for Information Science, Vol. 46, No. 6, 1995.
[8] HJØRLAND, B. The concept of subject in information science. Journal of Documentation, Vol. 48, No. 2, 1992.
[9] RIBEIRO, C. J. S. In search of the organization of knowledge: the management of information in the Brazilian Social Security databases with the use of the Domain Analysis approach. UFRJ, Rio de Janeiro, 2001.
[10] SARACEVIC, T. Relevance: a review of the literature and a framework for thinking on the notion in information science. Part II: nature and manifestations of relevance. Journal of the American Society for Information Science and Technology, Vol. 58, No. 13, 2007.
[11] GREISDORF, H. Relevance thresholds: a multi-stage predictive model of how users evaluate information. Information Processing and Management, Vol. 39, No. 3, 2003.
[12] PRIETO-DÍAZ, R. Domain analysis: an introduction. ACM SIGSOFT Software Engineering Notes, Vol. 15, No. 2, 1990.
[13] DERVIN, B. Sense-making theory and practice: an overview of user interests in knowledge seeking and use. Journal of Knowledge Management, Vol. 2, No. 2, 1998.
[14] GOGUEN, J. A. Requirements engineering as the reconciliation of social and technical issues. In: JIROTKA, M.; GOGUEN, J. (Eds.). Requirements Engineering: Social and Technical Issues. San Diego: Academic Press.
[15] KOTONYA, G.; SOMMERVILLE, I. Requirements engineering: processes and techniques. West Sussex, England: John Wiley & Sons.
[16] ROBERTSON, S.; ROBERTSON, J. Mastering the requirements process. London: ACM Press/Addison-Wesley.
[17] VITRUVIUS. The ten books on architecture. New York: Dover Publications, Inc.
[18] ZACHMAN, J. A. A framework for information systems architecture. IBM Systems Journal, Vol. 26, No. 3, 1987.
[19] SOWA, J. F.; ZACHMAN, J. A. Extending and formalizing the framework for information systems architecture. IBM Systems Journal, Vol. 31, No. 3, 1992.
[20] EVERNDEN, R. The Information FrameWork. IBM Systems Journal, Vol. 35, No. 1, 1996.
[21] EVERNDEN, R.; EVERNDEN, E. Information first: integrating knowledge and information architecture for business advantage. 1st ed. Burlington.
[22] ROSENFELD, L.; MORVILLE, P. Information architecture for the World Wide Web. 2nd ed. O'Reilly Media, Inc.
[23] HOURICAN, R. Information architectures - what are they? Business Information Review, Vol. 19, No. 3, 2002.
[24] HAVERTY, M. Information architecture without internal theory: an inductive design process. Journal of the American Society for Information Science and Technology, Vol. 53, No. 10, 2002.

Home telerehabilitation as an alternative to face-to-face treatment: feasibility in post-knee arthroplasty, speech therapy, and chronic obstructive pulmonary disease
Sherbrooke, Québec, J1H 4C4, Canada
ABSTRACT
The objective is to present three technological innovations used in home telerehabilitation and the results of pilot efficacy studies: telerehabilitation systems enhanced with TeRA software, external sensors and camera control. In our experience, the residential Internet network is of sufficient quality to make home teletreatment feasible. Innovative technologies enhance teletreatment sessions. Telerehabilitation appears to be a practical alternative to home visits by a physiotherapist for delivering rehabilitation services.
Keywords: home telerehabilitation, teleconsultation, elderly, total knee arthroplasty (TKA)
INTRODUCTION
Home telerehabilitation, defined as the provision of remote rehabilitation services to people with persistent and significant disabilities through information technology and telecommunications in their home [1], is growing as a complementary or alternative intervention to traditional face-to-face therapy in home care and outpatient services. The rationale for home telerehabilitation is to expand and facilitate the provision of rehabilitation services to people who cannot access them due to scarcity or lack of access to services, long waiting lists for home care services, or problems getting to and from the clinic [2]. Clinical care that can be delivered through home telerehabilitation encompasses active treatment and follow-up [3] rather than teleconsultation for diagnosis and assessment.
PURPOSE OF THE PAPER
The purpose of this presentation is to show three technological innovations used in home telerehabilitation. In addition, we present preliminary results on the effectiveness of home telerehabilitation as an alternative to conventional rehabilitation services provided after an acute illness.
TECHNOLOGICAL INFRASTRUCTURE FOR TELEREHABILITATION SERVICES
Based on the experience of two previous studies [4, 5], a telerehabilitation platform was developed and refined. The platform includes several components to provide a user-friendly experience for both the clinician and the patient at home. Although similar in many ways, two different systems were used to deliver telerehabilitation services: a home system and a clinical system.

The telerehabilitation platform and the software interface for both systems are illustrated in Figure 1. The core of these systems is the videoconferencing system (Tandberg 550 MXP), which uses an H.264 video codec and integrates a pan-tilt-zoom (PTZ) wide-angle camera and an omnidirectional microphone. The system is mounted on a 20-inch LCD screen, which displays the video received from the other end. Audio can be played through external speakers positioned on either side of the screen (internal LCD speakers are rarely enough to provide a satisfactory experience). The clinical system adds a computer to the home system. A software interface (TeRA), running on this computer, provides easy-to-use control and monitoring of videoconferencing sessions, camera control, integrated clinical testing, photo and video recording, and support for external sensors and devices [6]. The TeRA workflow is depicted in Figure 2. The platform was developed to ensure that interactions between clinicians and clients during telerehabilitation sessions are not hampered by technology, but are instead facilitated by user-friendly interfaces. A special effort was made to provide a mouse-based interface to intuitively control, from a single screen via point-and-click or area zoom, the PTZ camera functions at both sites. This functionality is represented in Figure 4 and sketched below.
Figure 1 - Telerehabilitation systems. The components of both systems are identified: A) videoconferencing system, B) LCD screen, C) router and modem that connect to the Internet, D) sensors and external devices, E) physician's computer and display screen.
Video and audio data are encrypted and transmitted over a high-speed Internet connection, allowing communication using a maximum bandwidth of 512 kbps for both upload and download. The system is also resistant to packet loss and ensures that audio and video are correctly synchronized. The home system may include external wireless sensing devices such as oximeters, breathing belts, instrumented soles, and inertial measurement units (Figure 2). These sensors provide additional real-time information to the physician, such as oxygen saturation level, heart rate, and anatomical angles. Bandwidth will vary depending on the number and type of sensors included in the configuration.
Figure 2 - External sensors. A computer can be included in the home system to accommodate various external sensors. The computer is wirelessly connected to the router, and the sensor network is connected to the computer. Sensors illustrated in the image include: inertial measurement units, breathing belts, pulse oximeters, and instrumented soles.
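The paper does not spell out how TeRA translates a point-and-click or area-zoom gesture into camera commands, so the following is only a minimal sketch of one plausible mapping, assuming a camera addressed by relative pan/tilt angles and a zoom factor (the function name and all parameters are illustrative, not part of TeRA):

```python
def area_zoom_to_ptz(sel_x, sel_y, sel_w, sel_h, frame_w, frame_h,
                     hfov_deg, vfov_deg):
    """Map a mouse-selected rectangle in the video frame to relative
    pan/tilt angles (degrees) and a zoom factor for a PTZ camera."""
    # Centre of the selection as an offset from the frame centre,
    # in normalized coordinates (-0.5 .. 0.5).
    dx = (sel_x + sel_w / 2) / frame_w - 0.5
    dy = (sel_y + sel_h / 2) / frame_h - 0.5
    # Relative pan/tilt: offset scaled by the current field of view.
    pan = dx * hfov_deg
    tilt = -dy * vfov_deg      # screen y grows downward
    # Zoom so the selection fills the frame, limited by the tighter axis.
    zoom = min(frame_w / sel_w, frame_h / sel_h)
    return pan, tilt, zoom

# Example: a 160x120 selection in the upper-right of a 640x480 frame,
# with a 60 x 45 degree field of view -> pan 15.0, tilt 11.25, zoom 4.0.
print(area_zoom_to_ptz(400, 60, 160, 120, 640, 480, 60.0, 45.0))
```

A single-click recentre is the degenerate case of the same mapping, with the selection collapsed to a point and the zoom left unchanged.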

Figure 3 - TeRA software. When the software starts, the login screen A) appears, where access privileges can be controlled, as each user has a unique login ID. On a successful login, the main screen B) is displayed, where the user can select a client and connect easily by double-clicking the connect button, displaying interface C), which shows the connection process. When the connection is complete, interface D) displays the remote video and provides control of the camera, and E) allows the completion of tests directly in the software.
Figure 4 - Camera control. Camera control is done entirely with the mouse. With one click, the camera is centered on that point. In A), users select an area with the mouse and release the mouse button. The camera moves and focuses on the area, as shown in B).
PILOT CLINICAL STUDIES USING INNOVATIVE TECHNOLOGY
Post-knee arthroplasty [4]
Purpose: The purpose of this study was to investigate the efficacy of home telerehabilitation provided after knee replacement surgery (total knee arthroplasty, TKA). The setting was a service center linked to the patient's home by high-speed residential Internet service.
Method: A pre/post-test design with no control group was used for this pilot study. Five community-dwelling elderly people who underwent knee arthroplasty were recruited prior to discharge from an acute care hospital. The telerehabilitation treatments (16 sessions) were delivered by two trained physiotherapists from a service center. Capacity (knee range of motion and balance) and function (locomotor performance in walking) were measured in face-to-face evaluations before and at the end of the treatments by an independent evaluator. Health professional and patient satisfaction was measured through questionnaires.
Results: The technology was solid despite some connection drops during the sessions. The satisfaction of health professionals with respect to the technology and the communication

experience during the therapy sessions was similar or slightly lower. One participant was lost to follow-up, which was not due to the technology. Clinical outcomes improved for all subjects, and these improvements were maintained two months after discharge from home telerehabilitation. Participant satisfaction with home telerehabilitation services was very high.
SPEECH THERAPY (UNPUBLISHED)
Purpose: The purpose of this study was to investigate the efficacy of speech therapy teletreatment for rehabilitation services delivered after a cerebrovascular accident. The setting was a service center and a simulated home (within the service center) using high-speed home Internet services.
Technology related to speech therapy: The telerehabilitation platform was adapted for speech therapy treatment, since patients must react to visual cues presented by the clinicians. Therefore, an interactive computer was added to the platform.
Method: The design used for this study was a pre/post-test with a baseline, such that each patient serves as his or her own control. In-home speech therapy teletreatment was provided over a period of two months. Three patients (two women and one man) who had suffered a cerebrovascular accident (CVA) with language problems were recruited. They were at different stages of their rehabilitation: 2, 6 and 8 months post-stroke. Depending on the items failed in the recognition task of the assessment, the items were divided into two groups for treatment: half were trained during home teletreatment and the other half were not trained (control). The comparison between the number of successful items before and after treatment served as the outcome measure.
Results: Despite the fragility of the patients, the technology was considered very satisfactory for their treatment. Clinical outcomes improved; the three subjects showed great improvement on trained items compared to untrained items.
CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) (UNPUBLISHED)
Purpose: The purpose of this study was to investigate the efficacy of home physical therapy teletreatment for rehabilitation services provided to patients with COPD.

Technology related to cardiopulmonary rehabilitation: The telerehabilitation platform was adapted for cardiopulmonary rehabilitation treatment, since some physiological data, such as heart rate and oxygen saturation, must be monitored in real time. Therefore, a Nonin probe was added to the platform.
Method: A pre/post-test design with no control group was used for this pilot study. An elderly person living in the community with COPD was recruited prior to discharge from his rehabilitation program. Telerehabilitation treatments (16 sessions) were performed by two trained physiotherapists from the service center. Function (locomotor performance in walking) and quality of life were measured by an independent rater in face-to-face assessments before and at the end of the treatments.
Results: Locomotor performance did not change between T1 and T2. However, all four aspects of quality of life measured (dyspnea, fatigue, emotions and control) improved between T1 and T2.
CONCLUSION
In our experience, a residential Internet network is of sufficient quality to make home teletreatment feasible. Innovative technologies enhance teletreatment sessions. Telerehabilitation appears to be a practical alternative to home visits by a physiotherapist for delivering rehabilitation services.
P. Boissy, H. Corriveau, H. Moffet, N. Marquis, L. Dechêsne, F. Cabana
REFERENCES
1. Cooper, R. A., et al. Telerehabilitation: expanding access to rehabilitation expertise. Proceedings of the IEEE, (8).
2. Wakeford, L., et al. Telerehabilitation position paper. American Journal of Occupational Therapy, (6).
3. Forducey, P. G., et al. Using telerehabilitation to promote TBI recovery and transfer of knowledge. NeuroRehabilitation, (2).
4. Tousignant, M., et al. Home telerehabilitation for post-knee arthroplasty: a pilot study. International Journal of Telerehabilitation.
5. Tousignant, M., et al. Home-based telerehabilitation for older adults after discharge from an acute hospital or rehabilitation unit: a proof-of-concept study and costs estimation. Disability and Rehabilitation: Assistive Technology, (4).
6. Hamel, M., R. Fontaine, and P. Boissy. Home telerehabilitation for geriatric patients. IEEE Engineering in Medicine and Biology Magazine, (4).
THANK YOU

Research on Fuzzy Comprehensive Evaluation of Performance Analysis in Data Warehouse Engineering Model Design
Yan Wang, Glorious Sun School of Business and Management, Donghua University, Shanghai, China
and Jiajin Le, School of Computer Science and Technology, Donghua University, Shanghai, China
and Dongmei Huang, College of Information Technology, Shanghai Ocean University, Shanghai, 201306, China
ABSTRACT
This paper discusses the data warehouse model design process and the various factors that affect the performance of data warehouse model design. The authors then introduce the multi-stage fuzzy comprehensive evaluation method, study its application to data warehouse model performance evaluation, and give a data warehouse performance measurement model based on the multi-stage fuzzy comprehensive evaluation method.
Keywords: Data Warehouse, Fuzzy Comprehensive Evaluation, Performance Analysis, Model Design
1. Introduction
The data warehouse provides effective system support and information for the decision-making process of end users. It is a subject-oriented data system that changes over time. The data it contains is extracted and converted from many existing business data sources. This characteristic determines that data warehouse design is not only business-driven engineering, but also data-driven. Therefore, engineering design should focus on effective data extraction, synthesis and integration from existing database resources. In data warehouse project development, data-driven model design is the key point. In practice, data warehouse performance issues become increasingly important [1]. The enormous scale of the data warehouse and the rapidly growing volume of data impose stringent requirements on system performance, making the monitoring and evaluation of performance particularly important in data warehouse engineering. It is absolutely necessary to analyze and evaluate performance during the design stages of the data warehouse model.
2. Data Warehouse Model Design and Performance Analysis
A data warehouse cannot sustain a constant workload. It must continually track users' analysis requirements and provide them with accurate and useful information for decision making, so establishing the data warehouse is a dynamic process of meeting new user demands. The continuous, large-scale expansion of data, as well as the rapid growth of the user workload, make the performance index an essential criterion of the quality of a data warehouse [2]. In the traditional system life cycle there is always extensive analysis and planning, but in the data warehouse design process system developers do not have enough time to analyze and plan: engineers are often required to integrate and build a data warehouse in a very short time. Therefore, system developers often do not have the opportunity to perform a comprehensive performance analysis and capacity design, and it is very difficult to manage performance across the entire data warehouse development process. As the workload is very difficult to predict, performance issues are often found last. Therefore, we must continuously evaluate and analyze performance during the development phase of the data warehouse. The model-based data warehouse design comprises the conceptual

model, logical model and physical model design. The conceptual model represents the relationships of "business information" in the real world. The logical model uses tables to store data; it is the relational model. The physical model involves the physical storage structure of these tables, such as the design of table indexes. The data warehouse performance factors are detailed below. Partitioning the subject field can provide a defined business edge and data warehouse goal. Good partitioning can reduce unnecessary system redundancy, reduce relationships between multiple subjects within the range of possibilities, and optimize performance. The public key of the subject field represents the characteristics and data bindings of the data warehouse and causes the internal relevance of the data to be divided into subject fields.
Figure 1. Model design and performance factors. The figure maps the design process from user requirements analysis through the conceptual, logical and physical models, with the performance factors of each: subject field partition, public key between fields, determination of the link, and field attribute group for the conceptual model; subject area analysis, classification of levels of granularity, data partition strategy, and relationship definition for the logical model; data storage structure, index strategy, data store location, and storage allocation for the physical model.
2.1 Conceptual model performance factors
The conceptual model is a business model and requires the analysis of business decision makers, business domain knowledge experts and IT experts of the business system. The conceptual model establishes a robust model based on the integration and reorganization of the data in the existing database system. Therefore, we must first analyze the existing database system, clarify its content and organizational structure, and then consider how to build an effective conceptual model. The original database design documents and the data relationship patterns in the data dictionary may clearly reflect the existing content of the enterprise, but a conceptual data warehouse model is a global, business-oriented model that provides a unified conceptual view for data integration from the various application-oriented databases. Therefore, a well-determined subject field gives the model a good design edge and avoids introducing redundant data. Clear definitions of the decision types can help designers find the best decision optimization point. At the same time, the preferences of decision makers and the sources of information for decision making are also important factors in model customization. The strength of the relevance linked by the public key between the subject fields has a major impact on the interaction and integration of data. Finally, the definition of the field attribute group must reflect the abstracted subject field. In a conceptual model, performance appraisal has a very vague standard, but subject field partitioning, public keys between fields, link determination, and field attribute groups are all important factors that affect performance.
2.2 Logical model performance factors
The logical model defines the logical implementation of the subjects being loaded and records the relevant content in the data warehouse metadata [3], which includes the definition of granularity, the data partitioning strategy, the division of tables and the definition of data sources, etc. We have to consider the following performance factors.
1. Subject field analysis. In the conceptual model, the basic subject fields have been established, but data warehouse design is an iterative process: we need to analyze the basic subject fields established in the conceptual model and select the first subject to implement. The main consideration of the selection strategy is that the subject must be large enough to be integrated into a system that can be applied, but must

be small enough to be quickly built and implemented. If the subject is too large and complex, we can implement a significant subset of it. In each feedback cycle, it is necessary to analyze the subject field. The size of the subject field is an important performance factor of the logical model. Second, the subject field must be independent: although there are interactions, it must have independent meaning and a clear border defining its data. Finally, the subject field must be complete, that is, all analysis data for the defined subject can be found in this subject field; if new data appears, it should be added to the subject. This is a gradual, iterative optimization process.
2. Partitioning the levels of granularity. In the logical design of the data warehouse it is necessary to determine the levels of granularity, which directly affect the amount of data and the appropriate type of query. By estimating the number of data rows and the number of DASDs, we can determine whether to choose single granularity or multiple granularity, as well as the granularity levels of the partition (see the sketch after this section). Appropriate granularity levels contribute to overall data warehouse performance. The logical model design faces a granularity determination problem that affects the amount of data stored in the data warehouse and the type of query the data warehouse can answer. The rational determination of granularity has a direct impact on other aspects of the design; therefore, a balance must be struck between the amount of data and the level of detail.
3. Determining the data partitioning strategy. To select appropriate data partitioning criteria, the following general performance factors should be considered: the amount of data, which determines whether and how the data should be partitioned; the requirements for data analysis and processing, since the partitioning standards must be closely related to the objects of analysis and processing; and whether the partitioning strategy is simple and easy to implement. Finally, the data partition must also conform to the granularity levels.
4. Definition of the relational mode. Each subject is implemented by a series of tables, which are linked together through the public key to form a complete subject. In the conceptual model, we identified the basic subjects of the data warehouse and described the public keys and the basic contents of each subject. In the logical model, the selected subject must be split to form a series of tables, and the relationship mode between the tables must be defined. The definition of the relationship mode affects the performance of data warehouse queries and analysis.
2.3 Physical model performance factors
Physical model design consists of defining the data storage structure and index strategy and determining data location and storage allocation. To achieve a high-performance physical model, the designer must explicitly define a table's index structures and index optimization, classify and merge tables reasonably, establish a hardware-optimized storage structure, model space and time efficiency based on the data environment, data usage frequency, data scale and response time requirements, optimize time and space efficiency, and make external storage devices explicit. External storage equipment design involves physical storage design, partitioning principles, block size requirements, device I/O characteristics, etc.
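As a rough illustration of the row-count estimate in item 2 above, the sketch below compares candidate granularity levels for one subject's fact data. All volumes are invented for illustration; ROW_BYTES, EVENTS_PER_DAY and DIM_COMBINATIONS are assumptions, not figures from the paper:

```python
# Rough sizing of candidate granularity levels for one subject's fact data.
ROW_BYTES = 64                 # assumed average stored row size, in bytes
EVENTS_PER_DAY = 2_000_000     # assumed detail-level business events per day
DIM_COMBINATIONS = 5_000       # assumed distinct dimension-key combinations

levels = {
    "detail (single granularity)": EVENTS_PER_DAY * 365,
    "hourly summary": DIM_COMBINATIONS * 24 * 365,
    "daily summary": DIM_COMBINATIONS * 365,
}

for name, n_rows in levels.items():
    gb_per_year = n_rows * ROW_BYTES / 1e9
    print(f"{name:30s} {n_rows:>15,d} rows/year  ~{gb_per_year:8.2f} GB/year")
```

Comparing such estimates against the available DASD capacity and the required query types is what drives the single-versus-multiple granularity decision described above.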
Table 1. The performance evaluation index system

First-class indices | Second-class indices
Conceptual model design (U1) | Subject field partition (U11); public key between fields (U12); determination of the link between subject areas (U13); determination of the field attribute group (U14)
Logical model design (U2) | Subject area analysis (U21); classification of granularity (U22); data partitioning strategy (U23); definition of relationship levels (U24)
Physical model design (U3) | Determination of the data storage structure (U31); index strategy (U32); data store location (U33); storage allocation (U34)

We will quantify the performance index of the data warehouse through quantitative analysis, taking into account that all the performance factors are dynamic and most of the factors are difficult

to describe quantitatively; the fuzzy evaluation process reduces this information asymmetry [4]. Therefore, this paper uses the fuzzy comprehensive evaluation method to quantify the various factors and evaluate the performance of the data warehouse model design process.
3.1 Establishment of fuzzy evaluation sets of the data model performance index
Let U = {U1, U2, U3} be the set of first-class indices, with weight set A = {A1, A2, A3}, where Ai (i = 1, 2, 3) is the weight of Ui, 0 <= Ai <= 1 and A1 + A2 + A3 = 1. Each first-class index decomposes into second-class indices Ui = {Ui1, Ui2, ..., Uin}, with weight set Ai = {ai1, ai2, ..., ain}, where aij (j = 1, 2, ..., n) is the weight of Uij, 0 <= aij <= 1 and ai1 + ai2 + ... + ain = 1. The performance indices in data warehouse model design include conceptual model design performance, logical model design performance and physical model design performance. Therefore, the sets of performance factors of the model design process are given below:
U = {U1, U2, U3} = (conceptual model design, logical model design, physical model design)
U1 = {U11, U12, U13, U14} = (determination of the subject domain classification, determination of the domain public key, determination of the subject area link, determination of the field attribute group)
U2 = {U21, U22, U23, U24} = (analysis of the subject area, classification of granularity levels, determination of the data partition strategy, definition of the relationship schema)
U3 = {U31, U32, U33, U34} = (determination of the data storage structure, determination of the index strategy, determination of the data store location, determination of the storage allocation)
According to the above system analysis, the system of indices is structured as shown in Table 1.
3.2 Multi-stage mathematical model
We adopt a comprehensive evaluation method that combines AHP with fuzzy evaluation to carry out the model design performance evaluation; the procedure is as follows [3]:
(1) Obtain the set of first-class indices U = (U1, U2, U3) and their respective weights W = (W1, W2, W3).
(2) Obtain the sets of second-class indices Ui = (Ui1, Ui2, ..., Uin) and their respective weights WU = (WU1, WU2, ..., WUn).
(3) Establish the set of judgment grades: let V = (v1, v2, ..., v5) be the set of grades, expressing decreasing degrees of judgment, as shown in Table 2 below.
Table 2. The distribution of the results of the judgment of the indices
Index grade | Index score | Index judgment
v1 | [90, 100] | Quite good
v2 | [80, 90) | Good
v3 | [60, 80) | Average
v4 | [30, 60) | Bad
v5 | [0, 30) | Quite bad
(4) Build the judgment matrix of the factors of each Ui according to the basic model:
Ri = | r11 r12 ... r15 |
     | r21 r22 ... r25 |
     | ...             |
     | rm1 rm2 ... rm5 |
where rsj is the degree to which factor s belongs to judgment grade vj, satisfying 0 <= rsj <= 1 and rs1 + rs2 + ... + rs5 = 1 (s = 1, 2, ..., m).
(5) Give the full evaluation of the Ui factors according to the basic model. The fuzzy matrix Ri represents the fuzzy relation from Ui to V: Bi = Ai · Ri.
(6) Use the synthetic operation of the fuzzy matrix to obtain the integral evaluation model: B = W · R, with R = (B1, B2, B3). To conclude, (W, V, R) constitutes a mathematical model of integral fuzzy evaluation: the fuzzy matrix R converts the fuzzy sets U and Ui into the fuzzy set B, and B is the integral fuzzy evaluation of the objective.
(7) Let F = (f1, f2, ..., f5)^T be a column vector of scores, giving the standard score for each grade in the set V.
(8) We can then calculate the performance score Z of the data warehouse by multiplying the vectors: Z = B · F. We can evaluate the performance of the data warehouse model design according to the value of Z. A minimal implementation of this procedure is sketched below.
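As a compact illustration of steps (1)-(8), the sketch below uses the AHP weights reported in Section 3.3 and the weighted-average synthesis operator M(·, +); the paper does not pin down its synthesis operator, and the judgment matrices R1 and R2 here are invented placeholders, not the paper's data:

```python
import numpy as np

# Judgment matrices R_i (rows: second-class indices, columns: grades v1..v5).
# Invented placeholder values; each row gives the membership degrees of one
# sub-index in the five grades and must sum to 1. Row counts follow the
# weight vectors reported in the paper's example (two U1 and four U2 factors).
R1 = np.array([[0.20, 0.25, 0.25, 0.30, 0.00],
               [0.30, 0.20, 0.25, 0.25, 0.00]])
R2 = np.array([[0.10, 0.20, 0.40, 0.20, 0.10],
               [0.00, 0.30, 0.40, 0.20, 0.10],
               [0.10, 0.10, 0.50, 0.20, 0.10],
               [0.10, 0.10, 0.40, 0.35, 0.05]])

# AHP-derived weights from Section 3.3.
A1 = np.array([0.75, 0.25])               # weights of the U1 sub-indices
A2 = np.array([0.22, 0.36, 0.09, 0.33])   # weights of the U2 sub-indices
A = np.array([0.59, 0.41])                # first-class weights over (U1, U2)

# Steps (4)-(5): second-layer evaluation B_i = A_i . R_i.
B1 = A1 @ R1
B2 = A2 @ R2

# Step (6): first-layer evaluation B = A . R, with R stacking the B_i rows.
B = A @ np.vstack([B1, B2])

# Steps (7)-(8): grade scores F for v1..v5 and the final score Z = B . F.
F = np.array([100.0, 85.0, 70.0, 55.0, 40.0])
Z = B @ F
print("B =", np.round(B, 3), " Z =", round(float(Z), 1))
```

Because each row of the Ri is a distribution over the five grades and the weight vectors are normalized, B is itself a distribution over the grades, and Z always falls between the lowest and highest grade scores.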

3.3 Data and empirical analysis
Taking a practical data warehouse project as an example, we invited relevant data warehouse experts and senior engineers from a software company in China to rate the performance. We assume the maximum score is 100, divide the ratings into five grades according to their scores, and let each expert give a score for each item. In the resulting evaluation matrices, rsj = (the total score of grade j for item s) / (the number of raters). The evaluation matrix of the second-class factors is:
R = | 1/6  0    1/3  0    2/3 |
    | 0    1/3  0    1/6  5/6 |
    | 1/3  1/6  0    1/6  0   |
    | 2/   /6                 |
The AHP yields the weights of the sub-factors: A1 = (0.75, 0.25), A2 = (0.22, 0.36, 0.09, 0.33). The weight vector of the first-class indices subordinate to the modal set V is A = (0.59, 0.41). The fuzzy evaluation of the second layer is:
B1 = A1 · R1 = (0.22, 0.24, 0.25, 0.29, 0)
B2 = A2 · R2 = (0.07, 0.18, 0.42, 0.28, 0.06)
and the fuzzy evaluation of the first layer is:
B = A · (B1; B2) = (0.16, 0.22, 0.31, 0.29, 0.02)
With the score set F = (100, 85, 70, 55, 40)^T, the performance evaluation value is Z = B · F. Decision makers can compare the project performance result with the critical values of their performance indices to determine whether to support the model design activities of the project. The above results show that the design performance of this project is at the middle level. At the same time, the weight parameters of the evaluation model should be adjusted for different types of data warehouse model design.
4. Conclusions
With the data warehouse model design performance evaluation indices defined here, the evaluation results obtained by the fuzzy comprehensive evaluation method can clarify the performance analysis during the model design phase. At the same time, the evaluation results can be compared with the actual performance values measured after the data warehouse structures are finalized, and the expert scoring weights of the fuzzy comprehensive evaluation can be adjusted accordingly.
5. Acknowledgments
The study is funded by a Shanghai Scientific Committee Key Project, Grant No. 08dz.
References
[1] Golfarelli, M.; Maio, D.; Rizzi, S. Applying vertical fragmentation techniques in logical design of multidimensional databases, 2000.
[2] Schrefl, M.; Thalhammer, T. On making data warehouses active, 2000.
[3] Paulraj Ponniah. Data Warehousing Fundamentals. Beijing: Publishing House of Electronics Industry, 2004.
[4] Li Hongxing; Wang Peizhuang. Fuzzy Mathematics. Beijing: National Defense Industry Press, 1993.

Remediation of Crude Oil Contaminated Soils Using Supercritical CO2
Adel A. Azzam, Ali H. Al-Marzouqi*, Abdulrazag Y. Zekri
Department of Petroleum and Chemical Engineering, UAE University, Al-Ain, P.O. Box 17555, United Arab Emirates
* Corresponding author. E-mail: hassana@uaeu.ac.ae
ABSTRACT
Hydrocarbon contamination of soils and sediments is an environmental concern that requires more efficient remediation techniques. Pure and modified supercritical carbon dioxide (SC CO2) was used for the extraction of petroleum hydrocarbons from soils contaminated with crude oil. The effects of CO2 flow rate (1 and 4 ml/min), temperature (80 and 160 C), pressure (250 and 350 bar) and the addition of 5% (v/v) organic solvent (heptane or toluene) on the extraction efficiency and the composition of the extracted hydrocarbons were investigated. The maximum extraction efficiency (92.26%) was obtained at 80 C and 350 bar with CO2 modified with 5% (v/v) heptane. The CO2 extraction efficiency increased with pressure and decreased with temperature. Chemical modification of CO2 by adding heptane increased the extraction efficiency. Soil analysis after the extraction process shows that pure SC CO2 was able to remove up to 92.86% of the TPH in the contaminated soil. In addition, a significant reduction in the PAH level was observed. Extraction with supercritical fluids proved to be an efficient method for the remediation of soils contaminated with hydrocarbons.
Keywords: remediation, contaminated soil, crude oil, supercritical CO2
1. INTRODUCTION
Soil contamination with crude oil and petroleum products is often observed at industrial sites, causing environmental contamination which can be hazardous to the health of plants, animals and humans [1-4]. Hydrocarbon mixtures can contain complex and dangerous chemical compounds, such as total petroleum hydrocarbons (TPH), polychlorinated biphenyls (PCBs) and polycyclic aromatic hydrocarbons (PAHs). The removal of such compounds from contaminated sites is a significant and challenging problem. The most important and widely used remediation methods are incineration, thermal desorption, biological remediation, chemical treatment and solvent extraction [5]. Conventional techniques such as landfill disposal, thermal desorption, incineration and liquid solvent extraction are costly and carry risks associated with air and residual contamination. Biological remediation is a fairly slow process, with potential logistical and practical drawbacks. Despite great efforts and expenditure of resources to develop technically and economically effective cleanup processes for contaminated soils, no widely accepted method has been found and further research is still needed. Therefore, new methods are being investigated to improve remediation efficiency and reduce remediation costs and time. For three decades, supercritical fluids (SCFs) have been used as extraction media to remove various types of substances from solid matrices. The unique properties of SCFs that make them technically attractive are their increased ability to dissolve organic compounds, an ability that can be easily adjusted by changing the temperature and/or pressure, thus shifting the fluid properties between gas-like and liquid-like. Such properties allow SCFs to dissolve and carry away materials like a liquid, but also to enter very small pores like a gas.
The most popular fluid is supercritical carbon dioxide (SC CO2) because it is non-toxic, non-flammable, chemically stable, readily available, inexpensive, environmentally acceptable, and can be easily separated from products. Although SCF technology has been successfully implemented for environmental remediation in the laboratory, its commercialization still lacks the significant technological improvement required to achieve economic viability. Like other new technologies, SFE technology using CO2 as the fluid has its specific problems. One of these problems is the limited ability of SC CO2 to dissolve and separate high-molecular-weight or polar organic compounds, even at very high densities. To increase the efficiency of the SFE process for such compounds, the selectivity and solubilizing power of SC CO2 can be improved by adding polar organic compounds, known as modifiers. Significant research has been carried out to study various aspects of pollutant removal by SC CO2, reported in several critical reviews [1,6-8] and hundreds of other scientific articles. Supercritical CO2 has been used successfully to extract a variety of organic compounds such as polycyclic aromatic hydrocarbons (PAHs) [9-12], polychlorinated biphenyls (PCBs) [1,11,13-16], pesticides [17-18] and hydrocarbons [11,19-23]. However, data for CO2 extraction at extremely high pressures and temperatures are scarce in the literature, especially for crude oil contaminated soils. Al-Marzouqi et al. [23] showed that SC CO2 at 300 bar and 120 C is capable of extracting about 70% of the hydrocarbons from a typical UAE soil contaminated with crude oil. The aim of this study was to investigate the ability of pure and modified CO2 under supercritical conditions to remediate crude oil contaminated soils and achieve higher extraction efficiencies.
2. EXPERIMENTAL
Materials
Carbon dioxide (% purity) was supplied by Abu Dhabi Oxygen Company. Crude oil (number average molecular weight = g/mol, density = g/ml) was obtained from the Bu Hasa oil field (Abu Dhabi, United Arab Emirates). The chemical modifiers (n-heptane and toluene) and organic solvents (dichloromethane and methanol) were 99% pure analytical grade and were supplied by Sigma-Aldrich. Soil samples (bulk density = 1.6 g/ml, mean particle size = 150 µm) were collected from the Sahel oil field in the United Arab Emirates. The porosity and permeability of the soil were 35% and Darcy, respectively.
Experimental design
The extraction of hydrocarbons with SCFs from contaminated soil was carried out following a full factorial experimental design with four factors: pressure (250 and 350 bar), temperature (80 and 160 C), flow rate (1 and 4 ml/min) and

type of fluid (pure SC CO2, SC CO2 modified with 5% (v/v) toluene, and SC CO2 modified with 5% (v/v) n-heptane). Each experiment was repeated twice, resulting in a total of 48 experiments. The experiments were performed in randomized order to eliminate various types of bias due to uncontrolled nuisance factors. Statistical analysis was performed using the SPSS statistical package (SPSS Inc., version 15.0). All statistical analyses of the effects of the variables on extraction efficiency were performed using a multi-way analysis of variance (ANOVA) with two replicates per cell.
Experimental apparatus
The experimental set-up consisted of a 260 ml capacity syringe pump and control system (ISCO 260D), a 100 ml stainless steel extraction chamber (DBR-JEFRI BE) and a cold trap, as previously described (Al-Marzouqi et al., 2007). The extraction chamber was kept in an air-circulation oven (Memmert ULE 400) with temperature control. The pressure inside the extraction chamber was measured and controlled by the ISCO system. A micro-metering valve (HIP 15-12AF1-V) was used as an expansion valve at the outlet of the extraction chamber to achieve good flow control. Methanol circulating at -15 C was used as a cold trap to separate CO2 from the other components of the mixture.
Experimental procedures
Soil samples were spiked with 10% w/w crude oil and placed in the extraction chamber. The extraction chamber was kept in the oven at the desired temperature until thermal equilibrium was reached (30-60 min). The chamber was then pressurized with CO2 to the desired pressure and held for another 30 minutes until equilibrium was reached. In the case of modified CO2, a second syringe pump was used to deliver the cosolvent (heptane or toluene), which was mixed with the CO2 stream in the desired ratio. Pure or modified carbon dioxide was then fed under supercritical conditions to the ISCO SCF extraction system (SFX system) and equilibrated for about 15 minutes. The SCF was allowed to flow through the tubing coil and into the extraction chamber from the bottom. The fluid was equilibrated with the spiked soil sample for at least 30 minutes. The supercritical solution was then allowed to flow into a vial, and the extract was separated from the supercritical fluid by depressurizing the system into the cold trap. Residual hydrocarbons in the soil after the SFE process were also analyzed for total petroleum hydrocarbon (TPH) and polycyclic aromatic hydrocarbon (PAH) concentrations.
3. RESULTS AND DISCUSSION
The CO2 extraction efficiency (the ratio of extracted hydrocarbons to the initial amount of crude oil in place; written as a formula below) is used throughout this study to assess the ability of CO2 to extract hydrocarbons from the soil. The average extraction efficiencies obtained under each of the investigated operating conditions are tabulated in Table 1. The lowest extraction efficiency (68.38% ± 1.99) was obtained for modified SC CO2 (with the addition of 5% toluene) at 250 bar and 160 C, while the maximum efficiency (92.26% ± 5.40) was found for SC CO2 (with the addition of 5% n-heptane) at 350 bar and 80 C. The highest efficiency obtained by SC CO2 alone (without modifier) was % ± 0.46, obtained at 350 bar and 160 C. It is believed that the complexity of the crude oil mixture, containing many compounds with significantly different physicochemical properties that vary with temperature and pressure, causes such a large variation in SC CO2 extractability.
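Written as a formula, this efficiency is simply the definition just stated, with m denoting mass:

$$\eta\,(\%) = \frac{m_{\text{extracted hydrocarbons}}}{m_{\text{crude oil initially in place}}} \times 100$$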
The results of the multi-way ANOVA based on the original extraction efficiency values show that temperature, pressure and type of fluid have a significant effect on the extraction efficiency, but the CO2 flow rate does not have a significant effect (Sig. > 0.05). In addition, pressure and type of fluid interact: the effect of pressure depends on the fluid used and vice versa, which is not the case for temperature. When the validity of the ANOVA model was checked through residual analysis, the normality assumption was found to be met, that is, the p-value was greater than 5% (see the sketch after Figure 2 for an equivalent analysis).
Effect of temperature
Figure 1 illustrates the effect of temperature on extraction efficiency. The values in the figure (including bars showing the standard error of the mean) represent the mean extraction efficiency of the 24 experiments at each temperature. The results indicate that temperature has an inverse effect on extraction efficiency. This could be due to the increase in kinematic viscosity and interfacial tension caused by the decrease in CO2 density with increasing temperature.
Figure 1. Effect of temperature on extraction efficiency.
Effect of flow rate
The effect of flow rate (1 and 4 ml/min) on extraction efficiency is shown in Figure 2. The values in the figure represent the mean extraction efficiency of the 24 experiments at each flow rate. Decreasing the flow rate generally ensures more contact time and results in higher extraction efficiencies for a given amount of CO2 used. However, saturation is reached at certain flow rates, below which the flow rate does not affect solvent extraction efficiency. The results indicate that the flow rate does not affect the extraction efficiency under the conditions used in this study. Therefore, the extraction process should be performed at 4 ml/min to reduce the extraction time.
Figure 2. Effect of CO2 flow rate on extraction efficiency.
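The paper ran its ANOVA in SPSS; the sketch below shows an equivalent analysis in Python on a balanced pressure x fluid sub-design with two replicates per cell, using invented placeholder efficiencies rather than the paper's measurements (the full design would also cross temperature and flow rate):

```python
import itertools
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Invented placeholder efficiencies (not the paper's data): a base level per
# fluid, a pressure effect, a pressure-fluid interaction and replicate noise.
base = {"pure": 78.0, "heptane": 84.0, "toluene": 76.0}
rows = []
for p, f, rep in itertools.product([250, 350],
                                   ["pure", "heptane", "toluene"], [0, 1]):
    eff = (base[f] + (6.0 if p == 350 else 0.0)
           + (1.5 if (f == "heptane" and p == 350) else 0.0) + 0.3 * rep)
    rows.append({"pressure": p, "fluid": f, "eff": eff})
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction, analogous to the SPSS analysis in the text.
model = ols("eff ~ C(pressure) * C(fluid)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # PR(>F) < 0.05 => significant factor
```

The C(pressure):C(fluid) row of the output is the interaction term that, in the paper's analysis, makes the pressure and fluid effects inseparable.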


Table 1. Properties and average extraction efficiencies of supercritical fluids for crude oil contaminated soil samples. Columns: temperature (C), pressure (bar), CO2 flow rate (ml/min), modifier at 5% (v/v) (none, n-heptane or toluene), CO2 density (g/ml), CO2 viscosity (µPa·s), CO2 kinematic viscosity ×10^8 (m²/s), and average extraction efficiency (%) ± SEM (standard error of the mean).

Effect of pressure and fluid type
Due to the interaction between pressure and fluid type, the effects of these parameters cannot be shown separately; therefore, Figure 3 shows the combined effect of pressure and fluid type on the extraction efficiency. Each point in Figure 3 represents the average extraction efficiency of 8 experiments for each type of fluid at a given pressure. As shown in the figure, the extraction efficiency of pure and modified SC CO2 increases as the pressure increases. This could be due to the decrease in kinematic viscosity caused by the increase in CO2 density with increasing pressure. In addition, the extraction efficiency of SC CO2 modified with 5% (v/v) heptane is higher than that of pure SC CO2 and of SC CO2 modified with 5% (v/v) toluene. The higher extraction efficiency when using heptane can probably be attributed to the richness of Bu Hasa crude oil in non-polar aliphatic hydrocarbon compounds such as n-alkanes (C6-C22), as reported by Al-Marzouqi et al. (2007). However, due to the interaction between pressure and fluid type, the extraction efficiency of SC CO2 modified with 5% (v/v) toluene turns out to be higher than that of pure SC CO2 at low pressure (250 bar), but lower at high pressure (350 bar).
Figure 3. Effect of pressure and fluid type on extraction efficiency.
Total petroleum hydrocarbon (TPH) analysis
The ability of pure SC CO2 to extract TPH from soil saturated with Bu Hasa crude oil was investigated for selected runs (Table 2). As shown in the table, pure SC CO2 at high pressure (350 bar) and low temperature (80 C) is capable of extracting 92.86% of the TPH from the contaminated soil, compared with 90.98% TPH extraction at the same pressure and a higher temperature (160 C).
The removal percentage was lower at the lower pressure of 250 bar (83.54% and 76.15% at 80 and 160 C, respectively), which coincides with the results obtained for the SC CO2 extraction efficiency. This study shows that pure SC CO2 can effectively remediate contaminated soil and thus reduce the harmful effects of TPH compounds on the environment.
Polycyclic aromatic hydrocarbon (PAH) analysis
PAH measurement was carried out for selected runs to investigate the efficiency of SC CO2 in extracting PAHs from soil samples contaminated with Bu Hasa crude oil. The concentrations of 16 PAHs in the selected soil samples after the SFE process are tabulated in Table 3. The results show that SC CO2 modified with 5% (v/v) heptane at low temperature (80 C) and high pressure (350 bar) could not completely remove some of the PAHs from the contaminated soil. Furthermore, the extraction by pure SC CO2 at the same pressure and temperature was the worst among all conditions. However, pure SC CO2 at 160 C and 350 bar resulted in better extraction of all 16 PAHs. This could be attributed to the effect of high temperature, which increases the volatility of PAHs and therefore increases their solubility in the fluid.
4. CONCLUSIONS
The effects of temperature, pressure, CO2 flow rate and two modifiers (heptane and toluene) at 5% (v/v) on the extraction capacity of SC CO2 were investigated. The results of this study indicate that SC CO2 is an effective solvent, leading to high extraction efficiencies when applied at high pressures. Furthermore, the results show that the flow rate does not have a significant effect on the efficiency of SC CO2; therefore, it is recommended to use the high flow rate, i.e. 4 ml/min, to reduce the time required for the remediation of contaminated soils. In addition, the temperature (80 versus 160 C) does not have a significant effect on the extraction efficiency of SC CO2 at high pressure (350 bar); therefore, it is recommended to apply the low temperature during the extraction process to save energy. Chemical modification of CO2 by adding 5% heptane was more efficient than the same level of modification with toluene. The optimal condition for extracting hydrocarbons from soils contaminated with Bu Hasa crude oil was SC CO2 modified with 5% heptane at high pressure (350 bar), low temperature (80 C) and a flow rate of 1 ml/min. Supercritical CO2 was able to remove 92.86% of the TPH present in the contaminated soil. In addition, pure SC CO2 and SC CO2 chemically modified with 5% (v/v) heptane were able to significantly reduce PAH concentration levels in soil contaminated with Bu Hasa crude oil.

Table 2. TPH analysis of clean soil, crude oil-enriched soil before SFE, and treated soil after the SFE process. Columns: sample, SFE temperature (C), SFE pressure (bar), TPH (µg/mg), TPH removal (%) and extraction efficiency (%). For the clean soil, TPH < 0.23 µg/mg.
Table 3. PAH analysis of clean soil, crude oil-enriched soil before SFE, and treated soil after the SFE process, for selected combinations of temperature (80 and 160 C), pressure (250 and 350 bar) and modifier (none or 5% (v/v) heptane). Removal efficiencies (%) are shown in parentheses; the removal efficiency was assumed to be 100% for PAH concentrations below the limit of detection (LOD). The 16 PAHs reported are naphthalene, acenaphthylene, acenaphthene, fluorene, phenanthrene, anthracene, fluoranthene, pyrene, benzo(a)anthracene, chrysene, benzo(b)fluoranthene, benzo(k)fluoranthene, benzo(a)pyrene, dibenzo(a,h)anthracene, benzo(g,h,i)perylene and indeno(1,2,3-cd)pyrene; reported removal efficiencies range from about 76% (pyrene) to 100%.

5. REFERENCES
[1] G. Anitescu and L.L. Tavlarides, Supercritical Extraction of Contaminants from Soils and Sediments, The Journal of Supercritical Fluids, Vol. 38, No. 2, 2006.
[2] E.W. Liebeg and T.J. Cutright, Investigation of bioremediation enhanced by the addition of macro- and micronutrients in PAH-contaminated soil, International Biodeterioration and Biodegradation, Vol. 44, 1999.
[3] D. Sarkar, M. Ferguson, R. Datta, S. Birnbaum, Bioremediation of petroleum hydrocarbons in contaminated soils: comparison of biosolids addition, carbon supplementation, and monitored natural attenuation, Environmental Pollution, Vol. 136, No. 1, 2005.
[4] N. Vasudevan and P. Rajaram, Bioremediation of Soil Contaminated with Oil Sludge, Environment International, Vol. 26, No. 5-6, 2001.
[5] S. Paria, Surfactant-enhanced remediation of organically contaminated soil and water, Advances in Colloid and Interface Science, Vol. 138, 2008.
[6] P.K. Wong and J. Wang, The Accumulation of Polycyclic Aromatic Hydrocarbons in Lubricating Oil Over Time: A Comparison of Liquid-Liquid and Supercritical Fluid Extraction Methods, Environmental Pollution, Vol. 112, 2001.
[7] B.T. Bogolte, G.A.C. Ehlers, R. Braun, A.P. Loibner, Estimation of PAH bioavailability for Lepidium sativum using sequential supercritical fluid extraction in a case study with contaminated industrial soils, European Journal of Soil Biology, Vol. 43, 2007.
[8] A.S. Pimenta, B.R. Vital, J.M. Bayona, R. Alzaga, Characterization of polycyclic aromatic hydrocarbons in liquid pyrolysis products of Eucalyptus grandis by supercritical fluid extraction and GC/MS determination, Fuel, Vol. 77, No. 11, 1998.
[9] C. Lutermann, W. Dott, J. Hollender, Combined modifier/in situ derivatization effects in supercritical fluid extraction of polycyclic aromatic hydrocarbons from soil, Journal of Chromatography A, Vol. 811, 1998.
[10] S.B. Hawthorne, C.B. Grabanski, E. Martin, D.J. Miller, Comparisons of Soxhlet extraction, pressurized liquid extraction, supercritical fluid extraction, and subcritical water extraction for environmental solids: recovery, selectivity, and effects on sample matrix, Journal of Chromatography A, Vol. 892, No. 1-2, 2000.
[11] P. Hallgren, R. Westbom, T. Nilsson, S. Sporring, E. Björklund, Measuring bioavailability of polychlorinated biphenyls in soil to earthworms using selective supercritical fluid extraction, Chemosphere, Vol. 63, No. 9, 2006.
[12] S. Sporring, S. Bøwadt, B. Svensmark, E. Björklund, Comprehensive comparison of classical Soxhlet extraction with Soxtec extraction, ultrasonic extraction, supercritical fluid extraction, microwave-assisted extraction and accelerated solvent extraction for the determination of polychlorinated biphenyls in soil, Journal of Chromatography A, Vol. 1090, No. 1-2, 2005.
[13] T. Nilsson and E. Björklund, Selective supercritical fluid extraction as a tool to determine the fraction of PCBs accessible for uptake by chironomid larvae in a limnic sediment, Chemosphere, Vol. 60, No. 1, 2005.
[14] W. Zhou, G. Anitescu, L.L. Tavlarides, Polychlorinated Biphenyl (PCB) Partition Equilibrium Between St. Lawrence River Sediments and Supercritical Fluids, The Journal of Supercritical Fluids, Vol. 29, No. 1-2, 2004.
[15] E. Reverchon and I. De Marco, Supercritical Fluid Extraction and Fractionation of Natural Matter, The Journal of Supercritical Fluids, Vol. 38, No. 2, 2006.
[16] C. Gonçalves, J.J. Carvalho, M.A. Azenha, M.F. Alpendurada, Optimization of supercritical fluid extraction of pesticide residues in soil by means of central composite design and analysis by gas chromatography-tandem mass spectrometry, Journal of Chromatography A, Vol. 1110, No. 1-2, 2006.
[17] C. Quan, S. Li, S. Tian, H. Xu, A. Lin, L. Gu, Supercritical fluid extraction and clean-up of organochlorine pesticides in ginseng, The Journal of Supercritical Fluids, Vol. 31, No. 2, 2004.
[18] R. Jaffé, D. Diaz, K.G. Furton, E. Lafarg, High-temperature supercritical carbon dioxide extractions from geologic samples: sample matrix effects and contributions, Applied Geochemistry, Vol. 15, No. 1, 2000.
[19] M.O.P. Crespo and M.A.L. Yusty, Comparison of Supercritical Fluid Extraction and Soxhlet Extraction for the Determination of Aliphatic Hydrocarbons in Algae Samples, Ecotoxicology and Environmental Safety, Vol. 64, No. 3, 2006.
[20] A. Akinlua, N. Torto, T.R. Ajayi, Supercritical Fluid Extraction of Aliphatic Hydrocarbons from Niger Delta Sedimentary Rocks, The Journal of Supercritical Fluids, Vol. 45, No. 1, 2008.
[21] I. Okamoto, X. Li, T. Ohsumi, Effect of supercritical CO2 as an organic solvent on the performance of cap rock sealing for underground storage, Energy, Vol. 30, 2005.
[22] R.J. Hwang and J. Ortiz, Mitigation of Asphaltics Deposition during CO2 Flood by Enhancing CO2 Solvency with Chemical Modifiers, Organic Geochemistry, Vol. 31, No. 12, 2000.
[23] A.H. Al-Marzouqi, A.Y. Zekri, B. Jobe, A. Dowaidar, Supercritical Fluid Extraction for Determination of Optimum Oil Recovery Conditions, Journal of Petroleum Science and Engineering (JPSE), Vol. 55, 2007.

'An innovative approach to interdisciplinary teaching for students of built environments'

Sarah DICKINSON, Department of the Built Environment, Sheffield Hallam University, Sheffield, South Yorkshire, S1 1WB, UK
Professor Paul WATSON, Department of the Built Environment, Sheffield Hallam University, Sheffield, South Yorkshire, S1 1WB, UK
Ann FRANKS, Department of the Built Environment, Sheffield Hallam University, Sheffield, South Yorkshire, S1 1WB, UK
Garry WORKMAN, Department of the Built Environment, Sheffield Hallam University, Sheffield, South Yorkshire, S1 1WB, UK

ABSTRACT
Interdisciplinary teaching is essential for students embarking on a course within the Built Environment. This article explores how the implementation of interdisciplinary teaching activities can improve the learning experience of students, preparing them for industry and equipping them with the skills needed to improve their integration and effectiveness when working within the construction industry. The paper further explores the teaching methods used in the Built Environment courses at Sheffield Hallam University and the benefits they bring to student development, and identifies how interdisciplinary teaching is integrated within the courses. In addition, the paper describes the collaboration undertaken to promote the transfer of knowledge between the different professional disciplines within the Built Environment. The authors draw on feedback from students, employers, and professional bodies to demonstrate the success of the student learning experience. The modules described make it easier for students to reflect on their experience and to develop personally and professionally. This paper reflects on the experience of both staff and students to provide best-practice examples that can be disseminated to other universities.

Keyword(s): Built environment, Collaboration, Interdisciplinary, Knowledge transfer, Reflective practitioner

1. INTRODUCTION
The construction industry involves complex processes in which communication is a key activity for achieving a successful project. Therefore, particularly with reference to the current economic downturn, employers are now looking for graduates who are confidently "ready to work", have the ability to communicate effectively across disciplines, and can work as part of a cohesive and effective team. That is why incorporating interdisciplinary teaching is a vital imperative for all Built Environment courses. The success of engineering and construction projects depends to a large extent on the strength of the multidisciplinary teams involved: the diverse range of skills and competencies required for a construction project must be molded into a coherent, holistic whole. Latham's report (1994) emphasized the need for the UK construction industry to work as a team and advocated interdisciplinary work at the design stage of a project. It is therefore essential that universities address this critical issue by providing opportunities for interdisciplinary student work in Built Environment courses. The teachers coordinate the modules, support the students and facilitate the learning process, but the students have to manage their practice groups. As in industry, they are obliged to create a professional

working relationship with the group and to manage their time and resources efficiently and effectively. In developing these modules, key issues needed to be addressed, including the student experience in relation to manageable academic content, workload management for academic staff, and how to incorporate interdisciplinary learning into the Built Environment program. Sheffield Hallam University, along with many other universities, offers a range of courses in the built environment, such as Building Surveying, Quantity Surveying and Construction Management; in reality, however, all of these disciplines need to be aware of what the others do and gain some experience of the challenges associated with working together. Sheffield Hallam University's Department of the Built Environment recognizes the importance of collaboration across disciplines and has introduced specific modules that focus on this aspect of interdisciplinary learning. For many students, the prospect of starting their new construction career, linking and integrating with other disciplines, can be daunting. Therefore, it is imperative that students understand and experience organizational behavior within the industry in which they are going to work, and it would be unwise for their studies to be entirely within one discipline. To develop communication skills and prepare students for this interdisciplinary work in the workplace, students complete two modules in which they work in interdisciplinary teams consisting of building surveyors, quantity surveyors and construction managers. These modules are:

Interdisciplinary Project at Level 4 (Year 1). This module introduces students to the complex, multidisciplinary nature of the construction sector they have chosen to study. In a project-driven environment, they develop an understanding of their own professional role and that of others within a small multidisciplinary team representing four key professions: architectural technology, building surveying, construction management and quantity surveying. Within that team, students explore issues related to design, building valuation, and the construction process and measurement, applying specialized knowledge to solve problems and draw conclusions. The module provides a coherent and integrated introduction to the four selected professional roles in construction. Interdisciplinary aspects are reinforced through an assessment that involves both single-profession and multidisciplinary groups.

Integrated Project at Level 6 (Year 4). At final-year level, students have developed a comprehensive knowledge base and the professional confidence to work in small interdisciplinary practice teams, providing external consultancy services on a client-focused project within a professional context. Students can apply the knowledge and skills developed in previous modules and draw on work experience from their internship year to develop solutions in a client-driven project. This means that the module has to be dynamic and flexible and can change from year to year. It was designed to meet the requirements of professional bodies and the industry need for problem-based and interdisciplinary work.

2. RATIONALE FOR INTRODUCING INTERDISCIPLINARY PRACTICE INTO THE BUILT ENVIRONMENT

Reflective Learning. Both modules are simulated and problem-based, building on individual students' knowledge of their specific discipline by reflecting on their previous studies. Students must be able to communicate effectively and understand the consequences of their decisions. The achievement of the learning outcomes of the modules coincides with three of Knowles' five principles of adult learning, namely that: "Adults have accumulated experiences that can be a rich resource for learning. Adults are ready to learn when they experience a need to know something. Adults tend to be less subject-centred than children; they are increasingly problem-centred." (Quoted by Fry et al 1999, p.25) [1]

Experiential learning, that is, the experience gained throughout students' careers and during job placement, plays a central role in the interdisciplinary learning process. Kolb's (1984) popular experiential learning theory is relevant to interdisciplinary learning in the sense that ideas and knowledge from students' studies, in relation to problems encountered in practice, can be shaped and reshaped through reflection on experience. Kolb's Learning Cycle can be adapted for interdisciplinary teaching, as shown in Figure 1.0.

Figure 1.0. Kolb's Learning Cycle adapted for interdisciplinary teaching.

"...teach both a method of approach and an attitude towards problem solving." (Schwartz et al 2001, p.2) [7]

Professional Competencies. The following list demonstrates the professional competencies of the Chartered Institute of Building (CIOB) [8] that are achieved on completion of the two interdisciplinary modules. Competencies such as these are mapped to the requirements of professional bodies such as the CIOB and the Royal Institution of Chartered Surveyors (RICS): decision making; communication; information management; planning and organization; management of work quality; health and safety management; implementation of sustainable construction and development; fulfilment of business and corporate objectives; and management of personnel at work. The modules therefore cover the softer management skills required by the professions and the construction industry.

There are, however, practical challenges to interdisciplinary teaching, such as large student numbers, different part-time/full-time degree paths, programme lengths of 3, 4 or 5 years, and the time restrictions involved in allowing all members of a group to meet at a given time in a given place. With this in mind, the main role of the tutor is to coordinate activities and develop a student-centred approach to learning. Staff are selected to lead each specialty area and provide students with subject-centred learning and support. In addition, the module tutor supports the student groups and forms a link between the client and the students. Part-time students were concerned that the group activity would be difficult to carry out because of their other work and study commitments. Sheffield Hallam University staff addressed this by creating part-time student groups and providing a group area on the Blackboard site for discussions and information sharing. Students praised these group areas, which they felt supported the group learning experience. It is important that the activity simulates a real project team, so students are asked to create a project file and produce meeting minutes. This not only allowed the tutors to monitor the professionalism of each group, but also provided a clear record of decisions and objectives. Students reported that this was a successful tool for creating and focusing the group, with the majority of students fully engaging with the activity. The first-year interdisciplinary project is based on a simulated client brief, but the final-year project involves a number of real people connected with the project, including a local councillor, two local government officials, three office workers, library staff and local residents. This added to the complexity of the assessment, but most of the students responded well to the project. Student feedback noted that while this was different from their other, more theory-based studies, they enjoyed the challenge.

4. CONCLUSION
As with any complex module, these modules present a series of challenges, both for the staff managing and coordinating the project and for the students taking the module. Each year students and staff reflect on the modules; they consider and evaluate the aspects of the project that have worked well and those that need further development, thus ensuring that the modules are continuously improved and remain current and relevant to the professions and the industry. Interdisciplinary teaching provides the best possible scope for enhancing students' understanding of multidisciplinary work within the construction industry.
It increases the employability of students through exposure to the work context and provides them with the opportunity to reflect critically on their own experiences in the workplace and on the knowledge gained from previous studies. By completing the modules, students are able to reflect critically on their overall development of teamwork and problem-solving skills and to identify ways of developing skills for working with other disciplines. Students often find group work challenging, and staff must carefully manage and control the process to create a professional climate in which each individual can express their views. Student feedback on the final-year module indicated initial concern about the random assignment of students to groups (in alphabetical order), although this added to the professionalism of the module. The students felt that the module had successfully replicated the interdisciplinary nature of the construction industry. They also reported that they had a better understanding of the other professional roles as a result of taking the module. Many of the students said that they felt they had improved their communication skills, and social networks have formed across the courses, which is good for the future of the industry.

General feedback on this module was extremely good, with students feeling that it consolidated their existing knowledge and allowed them to work in a professional context. Many reported that they had grown as individuals during the 12-week project and felt that it had prepared them for work in the industry. Staff have experienced support from their External Examiners in creating this module. Those External Examiners who work in an academic environment understand the complexity and problems that can arise in interdisciplinary group work, while those in industry appreciate the practical nature of the module as preparation for the workplace.

5. REFERENCES
[1] Fry, H., Ketteridge, S. & Marshall, S. (1999). A Handbook for Teaching and Learning in Higher Education. London: Kogan Page.
[2] Hind, David and Moss, Stuart. (2005). Employability Skills. Sunderland: Business Education Publishers Limited.
[3] Fry, H., Ketteridge, S. & Marshall, S. (1999). A Handbook for Teaching and Learning in Higher Education. London: Kogan Page.
[4] Mullins, Laurie. (1990). Management and Organisational Behaviour. London: Pitman Publishing.
[5] Gibbs, Graham and Habeshaw, Trevor. (1989). Preparing to Teach. Bristol: Technical and Educational Services Ltd.
[6] Hind, David and Moss, Stuart. (2005). Employability Skills. Sunderland: Business Education Publishers Limited.
[7] Schwartz, Peter, Mennin, Stewart and Webb, Graham. (2001). Problem-Based Learning. London: Kogan Page Limited.
[8] Chartered Institute of Building. Record of Competency Achievements. [Online]. Last accessed February 22, 2010.

Triaxial inertial magnetic tracking in quiet standing analysis using the wavelet transform

A. Martínez-Ramírez 1, P. Lecumberri 1, M. Gómez 1, M. Izquierdo 2
1 Department of Mathematics, Public University of Navarra; 2 Center for Studies, Research and Sports Medicine, Government of Navarra, Pamplona (Spain).

1. INTRODUCTION
Patients with frailty often have loss of muscle strength, fatigue easily, and are at increased risk (and fear) of falling [1-6]. Real-time human motion tracking is an accurate, inexpensive, and portable means of obtaining kinematic and kinetic measurements, and has particular applicability for monitoring disability in the aging population [7-12]. Using a triaxial inertial magnetic sensor suitable for ambulatory measurements, we analyzed the output signals from a quiet standing test in a healthy and a frail population. Time-frequency information based on wavelet decomposition was used to analyze all signals. The aim of this study was to examine the orientation signals of a triaxial inertial magnetic sensor suitable for ambulatory measurements and to explore the orientation patterns in quiet standing tests in a healthy and a frail population.

2. METHODOLOGY
The sensor we used consists of triaxial accelerometers and gyroscopes integrated into a portable device suitable for ambulatory measurements. The MTx (Xsens Technologies B.V., Enschede, The Netherlands) is a small and precise 3-DOF inertial orientation tracker. It provides drift-free 3D orientation as well as kinematic data: 3D acceleration, 3D rate of turn (rate gyroscope) and 3D earth magnetic field. The MTx is therefore an excellent measurement unit for measuring the orientation of human body segments [13]. We developed custom software in the LabVIEW graphical programming language to manage the acquisition and processing of the signals provided by the MTx sensor. This software offers the possibility of measuring several variations of the Romberg test, as well as the timed up-and-go test, a gait test, a sit-to-stand movement test and a thirty-second repeated sit-to-stand movement test. It provides 3D orientation, acceleration, earth magnetic field and angular velocity measurements, as well as real-time 3D linear velocity, force and power calculations [13,14]. The software features real-time data visualization, signal analysis and control with various pre- and post-processing options, including wavelet analysis. In this investigation, we are concerned with evaluating four variations of the Romberg test. This simple test offers an important clue to the presence of pathology in the proprioceptive pathway and should be performed carefully during the neurologic evaluation. The patient stands and balances with eyes open and eyes closed in two different positions: with feet together and in semi-tandem (heel to instep). Twenty-five subjects from a frail population and twenty-four subjects from a healthy population volunteered to participate in this study.

Our signals are non-stationary, and the wavelet transform is one of the most powerful tools for non-stationary signal processing. It is well known from Fourier theory that a signal can be expressed as the sum of a possibly infinite series of sines and cosines. The big disadvantage of a Fourier expansion, however, is that it has only frequency resolution and no time resolution. To overcome this problem, several solutions have been developed in recent decades that are more or less able to represent a signal in the time and frequency domains at the same time. The wavelet transform (or wavelet analysis) is probably the most recent solution to overcome the shortcomings of the Fourier transform [15,16,17]. The continuous wavelet transform (CWT) is a transformation onto a basis of wavelet functions. The idea behind this transformation is to project the signal onto scaled and shifted versions of a so-called mother wavelet, an oscillating signal that exists only for a finite period of time. The CWT carries a large computational cost, which is why the discrete wavelet transform (DWT) is the most popular approach in practice. The DWT can be implemented by repeatedly filtering the signal with a pair of filters. Specifically, the DWT decomposes a signal into an approximation signal, using a low-pass filter, and a detail signal, using a high-pass filter. The approximation signal is then decomposed into new approximation and detail signals. This process is carried out iteratively, producing a set of detail signals at different levels (scales) and a final coarse approximation of the signal.

Fig. 1: Discrete wavelet transform.

We tried different mother wavelets for calculating the wavelet indices and finally chose Coiflet 5, which experimentally gave the best results.
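For reference, the continuous wavelet transform described verbally above is, in its standard form,

\[
W_x(a,b) = \frac{1}{\sqrt{|a|}} \int_{-\infty}^{\infty} x(t)\,\psi^{*}\!\left(\frac{t-b}{a}\right)\,dt ,
\]

where ψ is the mother wavelet, a the scale and b the shift. The sketch below shows how the detail-coefficient sums used in this study could be computed with an off-the-shelf DWT implementation; it is an illustration based on the PyWavelets library, not the authors' LabVIEW software, and the input signal and its length are hypothetical.

```python
# Illustrative sketch (PyWavelets, not the authors' code): Coiflet-5 DWT of an
# orientation signal and the absolute sums of the detail coefficients at
# levels 1-4, as used for the wavelet indices in this paper.
import numpy as np
import pywt

def wavelet_detail_sums(signal, wavelet="coif5", max_level=4):
    # wavedec returns [a_L, d_L, d_{L-1}, ..., d_1]: the level-L approximation
    # followed by detail coefficients from coarsest to finest level
    coeffs = pywt.wavedec(signal, wavelet, level=max_level)
    details = coeffs[1:]
    # map level -> absolute sum of its detail coefficients
    # (level 1 = highest frequencies, level 4 = lowest)
    return {max_level - i: float(np.sum(np.abs(d))) for i, d in enumerate(details)}

# Hypothetical ten-second roll-angle record (the sampling rate is assumed,
# not stated per test in the text); noise stands in for a real Euler angle.
roll = np.random.randn(1000)
print(wavelet_detail_sums(roll))
```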

Coiflets are variations of Daubechies wavelets with vanishing moments for both the wavelet function ψ(t) and the scaling function φ(t). This wavelet allows a very good approximation of polynomial functions at different resolutions [18]. The absolute sums of the detail coefficients at levels 1, 2, 3 and 4 were calculated using the DWT. Approximation and detail coefficients of orientation signal samples (roll, pitch, yaw) taken from a frail subject and a healthy subject are given in Figs. 2 and 3, respectively.

Fig. 2: Wavelet decomposition of a sample orientation signal from a frail subject during standing semi-tandem balance tests with eyes closed.

Fig. 3: Wavelet decomposition of a sample orientation signal from a healthy subject during standing semi-tandem balance tests with eyes closed.

Fig. 4: Box plot of the detail signals at level 1 of the 3D orientation (roll, pitch, yaw) when subjects performed quiet standing semi-tandem balance tests with eyes closed.

Figures 4 and 5 correspond to quiet standing semi-tandem balance tests of ten seconds' duration with eyes closed.

3. RESULTS
The orientation signals were recorded in the form of Euler angles and decomposed into an approximation signal at level 4 and detail signals at levels 1, 2, 3 and 4. The approximation gives us information about the signal shape, while each detail corresponds to a frequency interval or time scale, reflecting vibrations in different frequency ranges [16,19]. Level 4 details correspond to low frequencies and level 1 details to high frequencies. In the box plots, the box indicates the lower and upper quartiles (the medians of the lower and upper halves of the data), the center line shows the median, and the whiskers represent the highest and lowest values of the distribution, excluding outliers and extreme values; outliers and extreme values are also presented. Figure 4 shows the sums of the level 1 detail coefficients: the healthy group showed significantly higher values than the frail group in the orientation around each axis. Figure 5 shows the sums of the level 4 detail coefficients: in this case, the frail group showed significantly higher values than the healthy group in the orientation around each axis.

Fig. 5: Box plot of the detail signals at level 4 of the 3D orientation (roll, pitch, yaw) when subjects performed standing semi-tandem balance tests with eyes closed.

4. CONCLUSION
Accelerometry is a suitable method for monitoring subjects during their activities of daily living because it allows objective and reliable measurements to be obtained without supervision and at low cost. A wide range of measurements is possible, including the assessment of balance during quiet standing. The combination of accelerometers and gyroscopes provides information related to orientation and postural changes. The system developed can be used for occasional clinical assessment and, alternatively, is suitable for unattended long-term ambulatory monitoring, making it very useful for monitoring the elderly [20]. These results provide evidence for choosing the orientation signals and wavelet details of a body-fixed sensor (i.e., including accelerometer, magnetometer and triaxial inclinometer) to identify conditions associated with losses of stability. Differences in stability can be identified with orientation sensors using wavelet decomposition.

5. REFERENCES
[1] W.M. Bortz II, "A conceptual framework of frailty: A review", Journals of Gerontology - Series A Biological Sciences and Medical Sciences, Vol. 57, pp. M283-M288.
[2] L.P. Fried, C.M. Tangen, J. Walston, A.B. Newman, C. Hirsch, J. Gottdiener, T. Seeman, R. Tracy, W.J. Kop, G. Burke, and M.A. McBurnie, "Frailty in older adults: Evidence for a phenotype", Journals of Gerontology - Series A Biological Sciences and Medical Sciences, Vol. 56, pp. M146-M156.
[3] J.E. Morley, "Mobility performance: A high-tech test for geriatricians", Journals of Gerontology - Series A Biological Sciences and Medical Sciences, Vol. 58.
[4] K. Rockwood, D.B. Hogan, and C. MacKnight, "Conceptualizing and measuring frailty in older people", Drugs and Aging, Vol. 17.
[5] D.M. Buchner and E.H. Wagner, "Preventing frail health", Clinics in Geriatric Medicine, Vol. 8, pp. 1-17.
[6] L. Ferrucci, J.M. Guralnik, S. Studenski, L.P. Fried, G.B. Cutler Jr., and J.D. Walston, "Designing randomized, controlled trials aimed at preventing or delaying functional decline and disability in frail, older persons: A consensus report", Journal of the American Geriatrics Society, Vol. 52.
[7] R. Moe-Nilssen, "A new method for evaluating motor control in gait under real-life environmental conditions. Part 1: The instrument", Clinical Biomechanics, Vol. 13.
[8] R. Moe-Nilssen, "A new method for evaluating motor control in gait under real-life environmental conditions. Part 2: Gait analysis", Clinical Biomechanics, Vol. 13.
[9] R. Zhu and Z. Zhou, "A real-time articulated human motion tracking using tri-axis inertial/magnetic sensors package", IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol. 12.
[10] R. Takeda, S. Tadano, M. Todoh, M. Morikawa, M. Nakayasu, and S. Yoshinari, "Gait analysis using gravitational acceleration measured by wearable sensors", J. Biomech., Vol. 42.
[11] R. Takeda, S. Tadano, A. Natorigawa, M. Todoh, and S. Yoshinari, "Gait posture estimation using wearable acceleration and gyro sensor", 2009.
[12] A. Weiss, T. Herman, M. Plotnik, M. Brozgol, I. Maidan, N. Giladi, T. Gurevich, and J.M. Hausdorff, "Can an accelerometer enhance the utility of the Timed Up & Go test when evaluating patients with Parkinson's disease?", Med. Eng. Phys., Vol. 32, pp. 3.
[13] A.M. Sabatini, "Quaternion-based strap-down integration method for applications of inertial sensing to gait analysis", Medical and Biological Engineering and Computing, Vol. 43.
[14] H.M. Schepers, D. Roetenberg, and P.H. Veltink, "Ambulatory human motion tracking by fusion of inertial and magnetic sensing with adaptive actuation", Medical and Biological Engineering and Computing, Vol. 48.
[15] S. Mallat and S. Zhong, "Characterization of signals from multiscale edges", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 14, No. 7.
[16] S.G. Mallat, "A theory for multiresolution signal decomposition: The wavelet representation", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 11, No. 7.

[17] N. Bidargaddi, L. Klingbeil, A. Sarela, J. Boyle, V. Cheung, C. Yelland, M. Karunanithi, and L. Gray, "Wavelet based approach for posture transition estimation using a waist worn accelerometer", 2007.
[18] S.-J. Huang and C.-T. Hsieh, "Coiflet wavelet transform applied to inspect power system disturbance-generated signals", IEEE Trans. Aerosp. Electron. Syst., Vol. 38.
[19] O. Rioul and M. Vetterli, "Wavelets and signal processing", IEEE Signal Processing Magazine, Vol. 8, No. 4, pp. 14-38.
[20] B. Najafi, K. Aminian, F. Loew, Y. Blanc, and P.A. Robert, "Measurement of stand-sit and sit-stand transitions using a miniature gyroscope and its application in fall risk evaluation in the elderly", IEEE Transactions on Biomedical Engineering, Vol. 49.

ACKNOWLEDGMENTS
Funded in part by a grant from the Ministry of Health, Spain (Aging and Frailty Network RD06/0013/1003).

Complex Event Processing in Power Distribution Systems: A Case Study

Debnath Mukherjee, Deepti Shakya, Prateep Misra
Tata Consultancy Services Limited, Plot A2, M2 & N2, Sector V, Block GP, Salt Lake Electronic Complex, Kolkata, India
{debnath.mukherjee, deepti.shakya, prateep.misra}@tcs.com

Abstract
Complex Event Processing (CEP) is an emerging discipline. This paper focuses on the application of CEP for fault detection and classification in an 11 kV radial distribution system using data collected from a Phasor Measurement Unit (PMU). The analysis was carried out by monitoring the electrical quantities in an 11 kV radial distribution system simulated in Matlab Simulink. The PMU is placed in the substation and transmits data to the command and control center, where the data is analyzed to identify the signatures of different types of faults; based on these signatures, rules have been designed to categorize the faults. An architecture stack based on a commercial CEP product (Tibco Business Events) has been designed to implement fault detection. In this paper we present the architecture and the simulation of the 11 kV distribution system, and share our experience of fault detection. The paper describes the categorization and analysis of various types of faults, such as single line-to-ground (SLG), double line-to-ground (DLG) and three-phase (3Φ) faults, using CEP software. It thus shows a real-life application of CEP software for fault classification with low computational time.

Keywords: Complex event processing, phasor measurement units, architecture, simulation, fault detection, fault classification

1 Introduction
Complex event processing (CEP) is an emerging technology that can be used to detect interesting patterns among events received in real time. There are two related fields: event stream processing, which is concerned with time-ordered sequences of events, and CEP, which has a broader scope that includes sets of partially ordered events known as event clouds [1]. In this paper, we describe how CEP can be used for fault detection and classification in an 11 kV distribution system using a PMU. Fault detection and clearance is important for any network from a safety and reliability perspective. In particular, we describe techniques that can be applied to distribution systems that monitor phasors using Phasor Measurement Units (PMUs). The unique contribution of this work is efficient fault detection and categorization using CEP technology.

2 Related work
Current fault detection technology is based on current-transformer-based relays. Relay coordination ensures that the impacted area is minimal; the disadvantage is that relay settings and coordination accuracy can differ, which sometimes does not provide optimal fault clearing. The authors are not aware of any reported fault detection and categorization work based on CEP technology. CEP platforms for phasor data concentrators and stream processing have been reported in [2]; however, that work does not mention the specific use of CEP for fault detection and classification. Analysis of fault causes and fault location is being considered for future work.

3 Phasor Measurement Units
PMUs are widely used in transmission networks for wide-area protection and monitoring applications, and for voltage instability analysis and prediction. Properly designed PMUs can also be integrated into distribution-level IEDs (intelligent electronic devices). This paper examines such distribution-level PMUs for fault detection and classification.

4 Problem Statement
The problem under consideration is the detection of the occurrence and type of faults in an 11 kV radial distribution system. The distribution system is simulated using Matlab Simulink: a 13-bus, 3-phase balanced radial distribution system in which a PMU is placed on bus 1. The PMU is simulated using the 3-phase measurement block from the Simulink library. Annex A shows the details of the system, and Table 4 in Annex A gives the line and load data for the test system. The system is simulated at a frequency of 60 Hz, and different types of faults are created at different locations. On bus 1, the three-phase current and voltage waveforms are sampled at a sampling frequency of 1 kHz, in the form of magnitudes and phase angles. The PMU aggregates these signals, i.e. the three-phase voltages and currents in terms of magnitude and phase angle, and sends the data to the command and control station, where it is processed for fault detection and classification using the CEP software. The format of the transmitted synchronized data follows IEEE Standard C37.118 [3]. The captured data is then used to detect the type of fault that occurred in the distribution system.

5 Use of the PMU to detect faults
Let A, B, C and G denote phases a, b, c and ground. The different types of faults that can occur in the distribution system are: single line-to-ground (SLG) faults, i.e. AG, BG and CG; line-to-line (L-L) faults, i.e. AB, BC and CA; double line-to-ground (DLG) faults, i.e. ABG, BCG and CAG; and the three-phase fault (3Φ), i.e. ABC.
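As a small data-structure illustration, one PMU report as described in the problem statement (three-phase voltage and current phasors plus a measurement timestamp) could be held in a container like the following. This is a hypothetical Python sketch: the class and field names are ours, not taken from the synchrophasor standard's frame layout.

```python
# Hypothetical container for one PMU report: per-phase voltage and current
# phasors (magnitude in pu, angle in degrees) plus a measurement timestamp.
from dataclasses import dataclass

@dataclass
class Phasor:
    magnitude: float   # per unit
    angle_deg: float   # degrees

@dataclass
class PmuReport:
    timestamp: float                  # measurement timestamp (seconds)
    voltage: dict[str, Phasor]        # phase "A"/"B"/"C" -> voltage phasor
    current: dict[str, Phasor]        # phase "A"/"B"/"C" -> current phasor

# A healthy-state report: nominal voltages, modest load currents (values assumed)
report = PmuReport(
    timestamp=0.0,
    voltage={p: Phasor(1.0, 0.0) for p in "ABC"},
    current={p: Phasor(0.5, -10.0) for p in "ABC"},
)
print(report.voltage["A"].magnitude)
```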

Whenever any of these faults occurs in the system, the voltage of the affected phase decreases in magnitude from its normal value while its current increases. When a fault occurs, the phase angle of that particular voltage also deviates from its normal value, and the phase angle of the current in that phase may likewise change. These changes can be used as key parameters for fault classification. Following [4], thresholds for these parameters were chosen for fault classification. For the voltage: if 0.8 pu < V < 1.1 pu, the system is in a normal state; if V < 0.8 pu, there is a fault in the system; and if V > 1.1 pu, capacitor switching or transients are occurring, where V represents the magnitude of the phase voltage and pu denotes per unit. Therefore, when the system is running under normal conditions, the voltage will be less than 1.1 pu and greater than 0.8 pu, and when a fault occurs, the magnitude of the voltage will fall below 0.8 pu. The voltage magnitude shows clear changes for low fault resistances, but for a high-resistance fault the voltage magnitude may remain in the normal band, i.e. greater than 0.8 pu; the current of the affected phase, however, will show a noticeable change from its normal value. Similarly, a phase-angle shift of ±1 degree is considered normal for the voltage, while for the current a phase-angle shift of ±10 degrees is acceptable. Two phase A SLG fault case studies are discussed in this article. When a phase A SLG fault occurs in the system, the magnitude and phase angle of the phase A voltage and current will show changes from their normal values; this change depends on the severity of the fault. The other two phases, B and C, will also undergo changes, but their values will remain within the normal range. Two cases of the SLG fault AG are considered, analyzing the change in the voltage and current waveforms monitored by the PMU on bus 1 with respect to changes in fault resistance and fault location. The first case examines the variation in the phase A voltage and current parameters with respect to a change in fault resistance from low to high values at a particular location, namely bus 25. The second case examines the changes in the phase A voltage and current parameters measured on bus 1 due to a change in fault location, namely bus 21 versus bus 31.

Case I: Study of the effect of varying the fault resistance at the end of bus 25 of the system shown in Annex A. In this case an AG fault occurs at the end of bus 25. Faults with different resistances are created at this location and their effects are analyzed at bus 1 using the PMU. Table 1 shows the variation in the three-phase voltage and current data monitored by the PMU on bus 1 as the fault resistance changes. It is observed that for low fault resistance the magnitude of the voltage is less than 0.8 pu, but as the fault resistance increases the voltage rises above this value; for high-resistance faults, therefore, the voltage magnitude remains within the normal range. Table 1 also shows the observed changes in the phase angle (Φ VA) of phase A for low- and high-resistance faults. The observed change depends on the instant of occurrence of the fault as well as on the fault resistance.
Table 1 shows that the phase angle of the phase A voltage changes even for high fault resistances but remains within normal limits, since this parameter depends more on the instant of fault occurrence than on the severity of the fault. Figure 1 shows the variation in the magnitude of the three-phase voltage when high-resistance faults occur at the end of bus 25.

Fig. 1: Three-phase voltage waveform when an AG fault with a 50 Ω fault resistance occurs and is then cleared at the end of bus 25.

Table 1: Magnitude and phase-angle variation of the phase A voltage and current with respect to fault resistance when an SLG fault occurs at bus 25. Columns: fault resistance (ohm), voltage V A (pu), voltage phase angle Φ VA (degrees), current I A (pu), current phase angle Φ IA (degrees).

Fig. 2: Three-phase current waveform when an AG fault with a 50 Ω fault resistance occurs and is then cleared at the end of bus 25.

Similarly, the three-phase current is also monitored on bus 1 using the PMU. Table 1 shows the changes in the magnitude and phase angle of the phase A current observed on bus 1 with respect to the change in fault resistance. It is observed that the magnitude of the phase A current shows a significant change even for high-resistance faults, and its phase angle likewise deviates from its normal value for both high- and low-resistance faults. In Fig. 2, the phase A current shows an abrupt change from its normal value for the AG fault, while the currents of the other two phases remain within the normal range. Similar studies can be carried out for double line-to-ground, line-to-line and three-phase faults.

Case II: Study of the variation of the fault location as well as the fault resistance at the ends of bus 21 and bus 31. The AG fault is simulated at two different locations: first at a distance of 2.5 km from the monitoring end, that is, at the end of bus 21, and then, after its removal, at 11 km, that is, at the end of bus 31. Table 2 and Table 3 show the changes in the phase A voltage and current parameters monitored on bus 1.

Table 2: Magnitude and phase-angle variation of the phase A voltage with respect to changes in fault resistance and fault location when an SLG fault occurs at buses 21 and 31. Columns: fault resistance (ohm), voltage V A1 (pu), voltage phase angle Φ VA1 (degrees), voltage V A2 (pu), voltage phase angle Φ VA2 (degrees).

Table 2 compares the change in the voltage magnitude and its phase angle at buses 21 and 31 with respect to increasing fault resistance for the SLG fault on phase A. Let V A1 and V A2 be the voltage magnitudes monitored when the fault occurs at the ends of buses 21 and 31, respectively; similarly, Φ VA1 and Φ VA2 represent the corresponding phase angles of the phase A voltage. Since this is an AG fault, the other two phases may undergo changes from their normal values in magnitude and phase angle, but within the normal range. It is observed that for low fault resistances the voltage drop, as measured by the PMU, is more severe when the fault occurs at the end of bus 21 than at the end of bus 31. However, if a high-resistance fault occurs at either of these two locations, the voltage magnitude for phase A remains greater than 0.8 pu as measured by the PMU. Given that each fault condition occurs at the same instant at both locations, the change in the phase angle of the phase A voltage decreases with increasing fault resistance. It is also observed that for the same fault resistance, the change in the phase angle (Φ VA) decreases as the fault location moves farther from the monitoring bus 1.

Table 3: Magnitude and phase-angle variation of the phase A current with respect to changes in fault resistance and fault location when an SLG fault occurs at buses 21 and 31. Columns: fault resistance (ohm), current I A1 (pu), current phase angle Φ IA1 (degrees), current I A2 (pu), current phase angle Φ IA2 (degrees).

Table 3 shows the current magnitude and phase-angle variation for phase A when the AG fault occurs at the two different locations. Let I A1 and I A2 be the current magnitudes monitored by the PMU on bus 1 for faults located at the ends of buses 21 and 31, respectively; similarly, Φ IA1 and Φ IA2 represent the corresponding changes in the phase angle of the phase A current. Both I A1 and I A2 show severe changes for low-resistance faults at both locations; however, for the same fault resistance, I A1 is larger than I A2.
Therefore, the change in the magnitude of the current is more prominent when the fault occurs close to the monitoring location, and the change reduces as the distance of the fault from the monitoring end increases. It was observed that at both locations, i.e. bus 21 and bus 31, the PMU placed on bus 1 registered a comparatively smaller magnitude change in I A1 and I A2 for high-resistance faults than for low-resistance faults. Similarly, the change in the current phase angles (Φ IA1 and Φ IA2) varies with both the fault resistance and the fault location: the severity decreases with increasing distance of the fault location from bus 1, as well as with increasing fault resistance. Analogous cases can be studied for double line-to-ground, line-to-line and three-phase faults. For high-resistance faults, the magnitude of the three-phase voltage can remain within the normal limit; in that case, the magnitude of the three-phase current can be observed for fault classification. Thus, if the current magnitude of one phase is greater than the defined normal limit, it indicates an SLG fault on that particular phase; similarly, if the current magnitude of two or three phases is greater than the normal limit, it indicates a DLG fault or a three-phase fault, respectively. The phase-angle changes of the three-phase voltages and currents can be examined to obtain additional information; these parameters indicate the instant of fault occurrence. If the fault occurs at the zero crossing, the phase shift will be close to zero; conversely, the change will be maximal if the fault occurs at the peak value. These values can also be used to distinguish between DLG and L-L faults.
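Before moving to the architecture, the classification logic just summarized (one abnormal phase indicates an SLG fault, two a DLG fault, three a three-phase fault, with current as the discriminator when high fault resistance keeps the voltage in the normal band) can be sketched as follows. This is an illustrative reading of the thresholds in the text, not the Tibco rule set: the current threshold is assumed, and the phase-angle test that separates DLG from L-L faults is omitted.

```python
# Sketch of the per-phase fault classification described above. A phase is
# flagged abnormal if its voltage drops below 0.8 pu or its current exceeds
# a normal-band limit (needed for high-resistance faults, where the voltage
# may stay above 0.8 pu). The current limit here is a hypothetical value.
V_FAULT_PU = 0.8
I_NORMAL_MAX_PU = 1.5   # assumed normal-band current limit

def classify_fault(voltage_pu, current_pu):
    """voltage_pu, current_pu: dicts keyed by phase 'A', 'B', 'C' (magnitudes in pu)."""
    abnormal = [p for p in "ABC"
                if voltage_pu[p] < V_FAULT_PU or current_pu[p] > I_NORMAL_MAX_PU]
    if not abnormal:
        return "NO FAULT"
    if len(abnormal) == 1:
        return f"SLG fault on phase {abnormal[0]}"
    if len(abnormal) == 2:
        return f"DLG fault on phases {abnormal[0]}{abnormal[1]}"
    return "Three-phase fault"

# e.g. phase A sags and its current rises while B and C stay normal -> SLG on A
print(classify_fault({"A": 0.6, "B": 1.0, "C": 1.0},
                     {"A": 3.0, "B": 0.5, "C": 0.5}))
```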

6 Solution
This section is organized as follows: Section 6.1 provides an overview of the solution, and Section 6.2 onwards provides details of the different modules of the solution.

6.1 Solution Overview
In this section we discuss the solution architecture for implementing fault detection using the PMU. The architecture diagram of the solution is shown in Figure 3. The PMUs are located in the substations, from where their messages are communicated to the command and control station (CCS). The latter receives messages from multiple PMUs and processes them to determine whether there are any faults. The message sent from the PMU has the following fields (only those required for fault detection are shown): 1. voltage magnitude and phase angle for each of the phases; 2. current magnitude and phase angle for each of the phases; 3. measurement timestamp. From these message attributes, the pattern indicating that a fault has occurred should be detected. The pattern for a single line-to-ground (phase A to ground) fault is shown below:

IF (the line-to-neutral voltage for phase A < 0.8 pu OR the phase A current exceeds the current threshold) AND the voltage and current magnitudes of the other phases (B, C) are within normal limits THEN IT IS A SINGLE LINE-TO-GROUND FAULT ON PHASE A

The logic consists of two parts: part 1 implements the condition that the line A voltage is below the normal threshold OR the line A current is above the normal threshold; part 2 implements the condition that the parameters of the other phases (B and C) are within normal limits. The two parts are joined by an AND operator. Similar rules hold for double line-to-ground faults and three-phase faults: for double line-to-ground faults two of the phases will show abnormal characteristics, and for three-phase faults all three phases will show abnormal characteristics.

The above logic maps naturally onto rule-based systems. A rule consists of two parts: an IF part (also known as the LHS) and a THEN part (known as the action). The IF part contains a condition that, when met, triggers the action contained in the THEN part. Rules are evaluated in cycles: a set of input conditions may be met in the first cycle, triggering actions that can change the values of variables appearing in the IF parts of other rules. In the next cycle, further rules may be triggered because those variables have changed (in some cases, rules that already fired may fire again if the variables in their condition parts have taken different values as a result of executing the actions). Evaluation proceeds in cycles in this way until no new rules fire; this is known as Run to Completion in Tibco Business Events.

One class of CEP software, which includes Tibco Business Events [5,6], consists of rule-based inference engines enhanced to support event streams. Tibco Business Events receives event streams via the Tibco Rendezvous bus software, which allows event producers to submit events. Other CEP programs, such as Esper and Streambase, use continuous queries to detect complex events: SQL-like queries that are registered with the CEP server before the real-time data arrives and that act on the real-time data, producing results with low latency. Some programs, like Tibco Business Events, offer both queries and rules. For this problem, the Tibco Business Events CEP engine was used. Each fault type was assigned a Business Events rule: there were separate rules for line A to ground, line B to ground, line C to ground, double line to ground and the three-phase fault.

Fig. 3: Fault detection architecture (substations with PMUs, data collector, message bus, CEP server, display, command and control station).

The system architecture consists of the following building blocks: at the substation level, a PMU sends messages to the command and control center; at the command and control station, the message from the PMU is received by a data collector, which is a standalone server program; the data collector packages the message from the PMU as an event and sends it to the CEP server (Tibco Business Events in this case) through the message bus (Tibco Rendezvous); and the CEP server runs the rules and sends alerts to the display and also to a pager/mobile.
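To make the run-to-completion cycle concrete, here is a minimal forward-chaining loop in the spirit described above: rules are re-evaluated against a shared working memory until a full pass fires nothing new. It is a toy illustration of the evaluation model, not how Tibco Business Events is implemented.

```python
# Toy forward-chaining evaluation: each rule is a (condition, action) pair
# over a shared working memory (a dict). Rules are re-evaluated in passes
# until a full pass triggers no action, i.e. run to completion.
def run_to_completion(rules, memory, max_passes=100):
    for _ in range(max_passes):
        fired = False
        for condition, action in rules:
            if condition(memory):
                action(memory)   # may change variables that other rules test
                fired = True
        if not fired:
            break
    return memory

# Hypothetical two-rule example: a low phase A voltage raises a fault flag,
# and the fault flag in turn queues an alert on a later evaluation.
rules = [
    (lambda m: m["va_pu"] < 0.8 and not m["fault"],
     lambda m: m.update(fault=True)),
    (lambda m: m["fault"] and not m["alerted"],
     lambda m: m.update(alerted=True)),
]
print(run_to_completion(rules, {"va_pu": 0.6, "fault": False, "alerted": False}))
```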

It also generates events that are sent to external systems, which can take automated actions based on the event. In the following subsections we explain the various components of the architecture.

6.2 Data Collector
The data collector is a Java program that adapts the input signal from the PMU to a format suitable for transmission to the CEP server via Rendezvous. It takes the input signal from the PMU and converts it to the event format. The format of the event is the same as that of the PMU message: it is composed of the voltage, voltage phase angle, current and current phase angle for each of the phases, plus the timestamp. The formatted data is sent to the Tibco Rendezvous bus in the Tibco-specific format. Rendezvous is a daemon that runs on every machine that participates in the distributed computation. While in the prototype both the data collector and the CEP server were hosted on the same computer, if they are hosted on separate machines the Rendezvous daemon must be running on both.

6.3 CEP Server
The CEP server hosts the rules for fault detection. There are three rules for line-to-ground faults (one for each phase), three rules for double line-to-ground faults (for AB-to-ground, BC-to-ground and CA-to-ground faults), and one rule for the three-phase fault. See Section 6.1 for the structure of the rules. One of the requirements on the CEP server was that the display should not show the same type of fault if it occurred in consecutive cycles, and that the display should indicate when a fault is cleared. We found that this required the use of a Concept object. We now explain Business Events Concepts and how they were used in our prototype. A Concept in Business Events is similar to a class in object-oriented theory, except that methods cannot be defined on it. A Concept can have attributes and can inherit from other Concepts, and Concepts can be instantiated just as classes are instantiated into objects in object-oriented design. A Concept called PMU was defined with two fields: a Boolean FAULT field (true if a fault occurred in the last cycle) and a String FAULTTYPE field (indicating the type of fault observed in the last evaluation). An instance of this Concept was initialized for each PMU during application startup. Using this Concept and its two fields (which hold the status of the PMU), the requirement that the display should not show the same type of fault in consecutive cycles was met. In addition, a no-fault rule was designed to reset the fields of the Concept instance when no fault occurred and to indicate when a previous fault is cleared.

6.4 Display
The design can support both command-line output and a SCADA (Supervisory Control and Data Acquisition) human-machine interface.

6.5 Solution Prototype
A prototype of the solution has been built. A MATLAB simulation was used to generate the data needed for fault classification: an 11 kV balanced distribution system with a radial feeder was simulated using Matlab Simulink. In the simulation, a PMU placed on bus 1 monitors the three-phase current and voltage waveforms; the PMU is simulated using the Simulink library. Three types of fault (line A to ground, AB to ground, and three-phase) were simulated with changes in fault resistance as well as fault location. These fault conditions cause changes in the phase voltages and currents in terms of magnitude and phase angle, and these changes are captured by the PMU on bus 1.
The data is sampled at a frequency of 1000 Hz. The per-cycle average is then calculated for the voltage and current waveforms, outputting one data sample per 60 Hz cycle. The data collector read the data files generated by MATLAB and sent them to Rendezvous, from where Tibco Business Events collected the events and generated the fault and fault-cleared outputs. The prototype has been designed keeping in mind that network communication will be used to transfer data between the PMU and the data collector.

7 Results and experience
The experience of using MATLAB to simulate faults is explained in Section 5. The key results and experience of using the CEP software are explained below.

How the use of rules simplified development and allowed us to develop a prototype in a short time. The CEP software demonstrated two features that simplified development: 1) defining events and concepts using an easy-to-use IDE made developing the solution easy; 2) the rule-definition capabilities facilitated development, and rules can also be defined by non-technical business users.

Some state maintenance is required, unlike traditional rules approaches. The simple use of rule engines is not enough for this problem: a certain amount of state must be maintained for each PMU, preferably in memory, as explained above (see Concepts in Section 6.3).

Simplicity of the messaging framework during the connection setup phase. The CEP server was integrated with the Rendezvous messaging infrastructure. We found that the Rendezvous messaging API was simpler than standardized APIs like the Java Message Service (JMS) during the setup phase: compared with obtaining a connection, a connection factory and a session, there was only a single call to Tibrv.open().
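The per-PMU state maintenance noted above (the PMU Concept of Section 6.3, with its FAULT and FAULTTYPE fields) can be mirrored in a few lines of plain Python. This is an analogy with hypothetical names, not Business Events syntax.

```python
# Plain-Python analogue of the Business Events PMU Concept: FAULT records
# whether a fault was seen in the previous cycle and FAULT_TYPE records which
# kind, so the display only changes when the situation changes.
class PmuState:
    def __init__(self, pmu_id):
        self.pmu_id = pmu_id
        self.fault = False       # fault seen in the previous cycle?
        self.fault_type = None   # e.g. "SLG-A", "DLG-AB", "3PH" (labels assumed)

    def update(self, fault_type):
        """Return an alert string for this cycle, or None if nothing changed.

        `fault_type` is the classification for the current cycle
        (None when no fault was detected).
        """
        if fault_type is None:
            if self.fault:                      # a previous fault has cleared
                self.fault, self.fault_type = False, None
                return f"{self.pmu_id}: fault cleared"
            return None
        if fault_type == self.fault_type:       # same fault, consecutive cycle
            return None                          # suppress the repeated alert
        self.fault, self.fault_type = True, fault_type
        return f"{self.pmu_id}: {fault_type} fault detected"

pmu = PmuState("PMU-1")
for cycle in [None, "SLG-A", "SLG-A", None]:
    print(pmu.update(cycle))   # None, detected, None (suppressed), cleared
```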

The performance of fault detection using the CEP server. We carried out a performance analysis of the fault detection system. The data collector fed fault data to the CEP server at a high rate. Each fault was detected in less than a millisecond; the full set of faults was detected in 1422 milliseconds, giving an average detection time of under a millisecond per fault. The actual execution time of the fault-detection rule is expected to be lower still, because the measured time includes the time it takes the client program to publish to the Business Events engine via Rendezvous. The hardware used was a 2.33 GHz Intel Core 2 Duo CPU with 2 GB of RAM.

8 Future Work
In this work, only one PMU is placed in the distribution system. This work can be extended by optimally placing more PMUs in the distribution system to increase the reliability of the solution. The fault classification algorithm can also be extended into a fault location algorithm. Line-to-line faults are not covered in this paper and will be considered in future work. The present work is based on set voltage and current levels for fault determination; however, features can be incorporated to detect sustained low-voltage conditions and overload conditions in the network and to distinguish them from high-resistance faults. A radial network is assumed; the algorithm could be modified to accommodate locally generated feeds and alternative switching paths.

9 Acknowledgments
The authors wish to acknowledge Sumit Kumar Ray, Narayanan Rajagopal, and Ranjeet Vaishnav of TCS for their valuable contributions to this work.

10 References
[1] David Luckham, The Power of Events: An Introduction to Complex Event Processing in Distributed Enterprise Systems, Addison-Wesley.
[2] Downloaded February 24, 2010.
[3] IEEE Standard for Synchrophasors for Power Systems, IEEE Power Engineering Society, pp. 1-57, March 2006.
[4] IEEE Recommended Practice for Power Quality Control, Report of the IEEE Standards Coordination Committee, pp. 1-70, June 1995.
[5] TIBCO Software Inc., Tibco Business Events User Guide.
[6] TIBCO Software Inc., Tibco Business Events Language Reference.

Fig.: 11 kV balanced distribution system (PMU on bus 1; buses 20 to 31 shown).
Table 4: Line and load data for the distribution system (end buses of each line, bus x and bus y; line length in km; real power load in kW and reactive power load in kvar on each bus).
ANNEX A: Data for the 13-bus distribution system are tabulated in Table 4. Base kVA: 1000; base kV: 11; conductor type: ACSR; line resistance: p.u./km; line reactance: p.u./km.

A Conceptual Framework for Project Engineering Success
Rodger Oren, PhD, TennCare Office, Nashville, TN, USA

ABSTRACT
This article analyzes the framework of project engineering and project management success in the new millennium. The model evolves from a project-focused vision to a contemporary, holistic, and integrated collection of factors in the context of the organization, the individual, and the project. The conceptual framework provides the context for future research, as it describes the evolution of project success constructs from the last century to the present. The work contributes to the field by looking at the organization through the lenses of the company's culture, its politics, and its organizational structure. Furthermore, the framework presents the variable of presencing, or the ability to anticipate the future as it emerges, as a necessary element of successful 21st-century projects.
Keywords: project management, culture, politics, leadership, triple constraint, project success.

1. INTRODUCTION
Professionals in the project engineering/project management field continue to be plagued by failure in most of their projects, even as the understanding and use of project engineering and management techniques increases (Johnson, [6]; Kerzner, [8]). Projects are behind schedule, over budget, or unable to meet original scope requirements. The century may be new, but the project engineering/project management song remains the same. The Wall Street Journal [2] reported on a natural gas pipeline project from Colorado to Ohio that cost $6.7 billion, a 50% cost overrun, affecting return on investment. In addition, the scope has not been met: the amount of natural gas to be delivered through the pipeline appears to be less than initially estimated, due to recent natural gas discoveries in the northeastern US (Davis, [2]). With the current global recession, project management failures seem to be on the rise: about 68% of projects run into some kind of problem that produces a failed result (Levison, [9]). Success in the projects of the new millennium seems as elusive as ever.

Organizations and their employees encounter considerable levels of change, which can be characterized as chaotic, creating a situation in which the company must learn and adapt to survive (Lichtenthaler, [10]). Scharmer [16] raised the need for a person to anticipate the future as it emerges as a technique for leading the necessary change in contemporary society. The challenges facing the organization require that its culture make sense of the situation, in order to provide some refuge to the participants in the environmental storm confronting the company (Ravasi & Schultz, [15]).

The academic and professional communities have attempted to understand how success is achieved within the context of the project. Academics have published books and articles on many engineering and project management topics, tools, and techniques. Practitioners have sought to learn from published materials and associations. However, the field continues to see more failures than successes. Are we missing something in the engineering and project management community?

2. WHAT ARE PROJECTS?
Projects are recognized as systems, with interacting components that require attention in order to succeed (Kerzner, [8]). This thinking is a paradigm shift, moving from a mechanistic analytical approach to a holistic mode in which interacting, interdependent variables are the name of the game (Gharajedaghi, [3]). As a system increases in complexity, its interdependencies increase, which describes the situation of the project manager in the current environment. Projects are more complex and interdependent, with many variables that define success. The environment is not just the project itself, but an effort within the organization, as affirmed by Carden and Egan [1]. Work in other fields is being reviewed and evaluated for use within the profession, for example, how leadership research can be applied to the project management profession (Gehring, [4]; Turner and Muller, [18]). Projects are systems in the social arena, and are thus characterized as multi-minded social models (Gharajedaghi, [3]). Prabhakar's leadership research [14] sought to qualitatively review the transformational behavior of project managers as an element of project success. Neuhauser [13] found that "there are no conclusive findings on effective leadership styles in men or women in the project setting" (p. 22).
While many in the field find leadership to be a factor in success, the ongoing debate over leaders' styles, methods, and interactions keeps leadership within the project management profession a topic of continuing study. As leadership factors remain the subject of debate, more work on leadership as a stand-alone topic, and on leadership as it relates to project management, will be necessary to find some agreement within the research community. As project managers attempt to achieve favorable results from their efforts, they must appreciate and consider the external factors that will influence or moderate their results. Perceptive managers understand that the organization will contribute, either positively or negatively, to the result that is sought. How skillfully project managers manage their behavior will add to the mix of elements that influence project success. Finally, the basic factors of the project itself will contribute to the success or failure of the tasks detailed in the plan that the project manager is executing.

3. THE CONCEPTUAL FRAMEWORK
Projects now have an organizational impact to a degree that was not present in the 20th century. Companies have been involved in reengineering projects, mergers and acquisitions, and other organizational change activities with low probabilities of success (Mourier and Smith, [12]). Project management has changed from a single project delivered within the organization to multiple projects executed simultaneously, which requires the manager to understand organizational and team dynamics (Lientz and Rea, [11]). Project management requires an understanding of systems theory, since a number of factors interact and influence the production of the good or service expected from the effort (Kerzner, [8]). Systems are defined as a collection of interacting parts that together are greater than the sum of their parts (Gharajedaghi, [3]). Project managers must be systems thinkers, since "the key to success becomes managing the interaction between the different parts and not the parts themselves" (Gray and Larson, [5]). This statement is an example of the triple constraint, where one needs to understand how cost, scope, and schedule interact in such a way that each is modified by a change in another parameter. However, the triple constraint is not enough in the new millennium. Business cycles are shorter, new unforeseen threats emerge, and companies see challenges where none existed before. Gharajedaghi [3] discussed how systems have evolved from older mechanistic concepts through interdependent variations to multi-minded social models consisting of purpose and change in their outcomes. Projects reflect this archetype: a combination of individual and organizational project parameters, behaviors, and actions.

The conceptual model (Figure 1, Conceptual framework of project success) describes the changes from the interactive mode of thinking to an intentional, actor-driven representation. The framework describes how increasing project complexity, increasing organizational stresses, and the need for transformational change drive organizational, individual, and project factors into an interrelated combination that is necessary for project management success. This combination is a shift from the quantitative approach to project management to an integrated, holistic approach, which Kerzner [7] defined as comprising the behavioral components necessary for success in contemporary settings. By sensing organizational factors, the project manager applies his or her experience and leadership skills to project factors to produce the desired result. The framework visually depicts the evolution from the 20th century to the 21st: whereas factors once stood alone in the 20th century, today the interaction between organizational, project, and individual factors occurs to a high degree.
This interaction is depicted as a gray area where the boundaries of the three factors overlap. The intersection of the variables describes the mix of factors in which each parameter within a factor boundary can contribute to the project outcome.

Figure 1 - Conceptual framework of project success

4. CONCLUSION
Although project success factors have been studied in previous situations, this paper presents a different view of the project outcome process. The work seeks to investigate how integrating factors combine in a way that produces a desirable result. Some project work has started to look at individual factors; however, no work has sought to integrate the individual variables presented here with the presencing variable discussed by Scharmer [16]. Furthermore, no work to date considers an organizational vision that blends culture with the political and structural views presented in Tichy's work [17]. The presented framework will be investigated through a variety of methods, using both quantitative and qualitative approaches, in order to determine its value to the profession. Whether or not the model proves its worth as a picture of reality, the development of a new view, with built-in variables, will give the research community other things to consider when looking at the world of the project manager. The results of the research will help the professional community to focus their efforts on the variables and factors that produce the greatest benefit. Future work should consider the project as a holistic system, living within an organization made up of people, guided by the leadership of a project manager.

5. REFERENCES
[1] L. Carden and T. Egan, Does our literature support sectors newer to project management? The search for quality publications relevant to nontraditional industries, Project Management Journal, Vol. 39, No. 3, 2008.
[2] A. Davis, Huge pipeline delivers billions to cities along its route, The Wall Street Journal, Vol. 254, No. 80, 2009, p. A3.
[3] J. Gharajedaghi, Systems Thinking: Managing Chaos and Complexity, New York: Butterworth-Heinemann.
[4] D. Gehring, Applying traits theory of leadership to project management, Project Management Journal, Vol. 38, No. 1, 2007.
[5] C. F. Gray and E. W. Larson, Project Management: The Managerial Process (4th ed.), New York: McGraw-Hill Irwin.
[6] J. Johnson, My Life is a Failure, West Yarmouth, Massachusetts: The Standish Group International.
[7] H. Kerzner, Advanced Project Management (2nd ed.), Hoboken, NJ: John Wiley and Sons.
[8] H. Kerzner, Project Management: A Systems Approach to Planning, Scheduling, and Controlling (9th ed.), Hoboken, NJ: John Wiley and Sons.
[9] M. Levison, Recession causes rising IT project failure rates, Computerworld. Retrieved August 29, 2009.
[10] U. Lichtenthaler, Absorptive capacity, environmental turbulence, and the complementarity of organizational learning processes, Academy of Management Journal, Vol. 52, No. 4, 2009.
[11] B. P. Lientz and K. P. Rea, Project Management for the 21st Century (3rd ed.), 2002. Retrieved October 7, 2009.
[12] P. Mourier and M. Smith, Conquering Organizational Change: How to Succeed Where Most Companies Fail, 2001. Retrieved October 7, 2009.
[13] C. Neuhauser, Project manager leadership behaviors and frequency of use by female project managers, Project Management Journal, Vol. 38, No. 1, 2007.
[14] G. Prabhakar, Switch leadership in projects: an empirical study reflecting the importance of transformational leadership on project success across twenty-eight nations, Project Management Journal, Vol. 36, No. 4, 2005.
[15] D. Ravasi and M. Schultz, Responding to organizational identity threats: exploring the role of organizational culture, Academy of Management Journal, Vol. 49, No. 3, 2006.
[16] C. O. Scharmer, Presencing: Learning from the future as it emerges: On the tacit dimension of leading revolutionary change, Conference on Knowledge and Innovation, Helsinki, Finland, 2000.
[17] N. M. Tichy, Managing Strategic Change: Technical, Political and Cultural Dynamics, New York: John Wiley & Sons, 1983.
[18] J. R. Turner and R. Müller, The project manager's leadership style as a success factor on projects: a literature review, Project Management Journal, Vol. 36, No. 2, 2005.

A mathematical relationship between the particle Reynolds number and the ripple factor using data from the Tapi River, India

Dr. S. M. Yadav, Associate Professor, CED, SVNIT, Surat; Dr. B. K. Samtani, Professor, CED, SVNIT, Surat; and Dr. K. A. Chauhan, Associate Professor, CED, SVNIT, Surat

ABSTRACT
The calculation of bed load allows for the fact that only a part of the shear stress is used for sediment transport, while part of it is expended in overcoming resistance due to bed forms; the total shear stress developed in the open channel therefore requires a shape correction by a factor called the ripple factor. Different methods have been followed to correct the actual shear stress in order to calculate the sediment load. The correction factors are based on the grain-size characteristics of the particles. In the present work, the ripple factor for non-uniform bed material has been obtained considering variables such as discharge, mean hydraulic depth, flow velocity, bed slope, and average particle diameter, using field data collected from the Tapi River over 15 years for a particular gauging station. The ripple factor is obtained using the Meyer-Peter and Müller formula, Einstein's formula, Kalinske's formula, Du Boys' formula, Shields' formula, Bagnold's formula, the average of the six formulas, and multiple regression analysis. The variation of the ripple factor with the particle Reynolds number is studied. The ripple factors obtained by the different approaches are further analyzed using the Origin software, and by performing a multiple regression on the 15 years of data with more than 10 parameters, a multiple-regression ripple factor was obtained. These values were further analyzed and a power-law relationship was developed, giving statistical meaning to the parameters. The ripple factor increases with increasing particle Reynolds number. A large deviation is seen in the case of the Kalinske approach when compared to the other approaches.

1. INTRODUCTION
Sarangkheda is one of the gauging stations on the Tapi River. In this paper, data from the last 15 years collected at this gauging station is used to calculate the ripple factor, which is calculated for the monsoon season. Several approaches are available for calculating the ripple factor; six of them are used in this paper. The 15 years of field data have been analyzed, and computer programs in MS Excel and Origin have been used to carry out the analysis. The relationship between the ripple factor and the particle Reynolds number has been established. Graphs are plotted for the above parameters using the Origin software and a statistical analysis of the obtained results is carried out.

2. OBJECTIVES
The main objectives of this work are: (i) to use various measured parameters to determine the ripple factor; (ii) to develop a mathematical model relating the particle Reynolds number and the ripple factor.

3. AREA OF STUDY AND DATA COLLECTION
The Tapi is the second largest west-flowing river of mainland India. The total length of the river is 724 km from its source to the Arabian Sea. The Tapi basin (Fig. 2) lies between latitudes 20°N and 22°N; 80% of the basin is in Maharashtra and the rest in the states of Madhya Pradesh and Gujarat. The Central Water Commission, Tapi Division, Surat, regularly collects daily discharge and sediment data at the Sarangkheda measurement site on the Tapi River (Fig. 3).
Sarangkheda is situated at a distance of about 488 km from the river's origin. Daily data during the monsoon were collected over a 15-year period, from 1981 to 1995, and seasonal bed load data were collected for the study from Central Water Commission data books.

4. DISCHARGE AND SEDIMENT OBSERVATIONS
Discharges are observed once a day at 08:00 hours at all sites and calculated by the area-velocity method. The cross section is divided into 15 to 25 segments according to IS 1192:1981. Depths are measured with sounding rods according to IS 3912:1966. Necessary wet-line and air corrections are made per IS 1192:1981. Velocity is measured with a cup-type current meter according to IS 3910:1966. Suspended sediment samples are collected in Punjab bottle samplers at a depth of 0.6D from the water surface.

Particle Reynolds number
The particle Reynolds number is related to the fall velocity of the particle.
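As a worked illustration of the area-velocity method mentioned above, discharge is the sum over segments of segment area times segment mean velocity. The sketch below is a minimal example; the segment widths, depths, and velocities are hypothetical values, not Tapi River data.

```java
public class AreaVelocityDischarge {
    /** widths [m], depths [m], velocities [m/s]; one entry per segment. */
    static double discharge(double[] width, double[] depth, double[] vel) {
        double q = 0.0;
        for (int i = 0; i < width.length; i++)
            q += width[i] * depth[i] * vel[i];  // segment area * mean velocity
        return q;                               // total discharge [m^3/s]
    }

    public static void main(String[] args) {
        double[] w = {10, 10, 10};     // hypothetical segment widths
        double[] d = {1.2, 2.5, 1.0};  // hypothetical segment depths
        double[] v = {0.6, 1.1, 0.5};  // hypothetical segment velocities
        System.out.println(discharge(w, d, v) + " m^3/s");  // ~39.7 m^3/s
    }
}
```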

$Re_p = \frac{w\, d_s}{\nu}$   (1)

where $w$ is the fall velocity of the solid particle with diameter $d_s$ and $\nu$ is the kinematic viscosity. This dimensionless number is widely used in the study of the dynamic properties of submerged solid particles and in resistance relationships.

Ripple factor
The bed load calculation allows for the fact that only part of the shear stress is used for sediment transport while part of it is expended in overcoming resistance due to bed forms; the total shear stress developed in the open channel therefore requires a correction in the form of a factor called the ripple factor.

Meyer-Peter and Müller formula (1948)
The Meyer-Peter and Müller formula can be expressed in its basic form as

$X = 13.3\, Y^{3/2}$   (2)

The ripple factor suggested by Meyer-Peter and Müller is

$\mu = (C/C_{90})^{3/2}$

where $C_{90}$ is based on $D_{90}$. The characteristics of the above equation are discussed below.

Grain-size characteristics: $D = \sum p_i \big/ \sum (p_i/D_i)$

Characteristics: the formula is mainly experimental in nature. The experiments were carried out for $D \ge 0.4$ mm, for cases in which suspended load was absent. The formula has been extensively tested and used for rivers with coarse bed material. Table 1.1 presents the approach, concept, and characteristics of the Meyer-Peter formula and of the other formulas used.

Relationship based on estimated and calculated parameters
The method used for the estimation of the ripple factor depends on many variables. The seasonal average values of these parameters calculated by the six methods are based on different approaches. The six methods used in the study are well known, widely used, and involve the most important hydraulic and sediment parameters; the ripple factor comparisons calculated to develop the sediment transport characteristics of this river are therefore compatible. Taking this into account, the average of the six methods can be used as the basis without loss of precision. The mean values of the ripple factor estimated by the six methods are thus considered a reliable basis for comparison with the collected data, and the relationships between the ripple factor and various dimensionless parameters are established.

Multiple regression analysis
A multiple regression analysis is carried out between the basic measured and calculated data, namely discharge per unit width, flow area, mean hydraulic depth, flow velocity, bed slope, bed width, and average sediment diameter, with the calculated mean values of $q_{bv}$ and $\mu$. The following equation is derived using tables 4.1 to 4.12 for each river:

$\mu_{multi} = C + C_1 q + C_2 A + C_3 S + C_4 B + C_5 V + C_6\,HMD + C_7 D_a$   (3)

where
$q_{bv}$ = bed load discharge in volume terms on a submerged-weight basis
$q_{bw}$ = bed load discharge under consideration on a submerged-weight basis
$q$ = discharge per metre width
$A$ = cross-sectional area of the flow
$S$ = slope of the bed
$B$ = width of the stream bed
$V$ = velocity of flow
$HMD$ = mean hydraulic depth
$D_a$ = average sediment diameter
$C, C_1, \dots, C_7$ = regression constants for $q$, $A$, $S$, $B$, $V$, $HMD$, and $D_a$.

Statistical analysis is carried out between the ripple factor, the bed load discharge, and the various variables by using multiple regression, and the Microcal Origin 7.5 non-linear least-squares fitter has been used to obtain the best-fit curve.

5. DATA ANALYSIS
This paper analyzes 15 years of field data from the Sarangkheda gauging station on the Tapi River.
Step 1: Daily discharge data is converted to monthly data.
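A hedged numeric sketch of the two quantities just defined may help; the input values below are hypothetical, chosen only to show the order of magnitude of the computation, and are not Tapi River data.

```java
public class RippleFactorDemo {
    /** Particle Reynolds number: Re_p = w * ds / nu. */
    static double particleReynolds(double w, double ds, double nu) {
        return w * ds / nu;
    }

    /** Meyer-Peter and Muller ripple factor: mu = (C / C90)^(3/2),
     *  where C90 is the coefficient based on D90. */
    static double rippleFactorMPM(double c, double c90) {
        return Math.pow(c / c90, 1.5);
    }

    public static void main(String[] args) {
        double w  = 0.05;    // fall velocity [m/s] (hypothetical)
        double ds = 0.0005;  // particle diameter [m] (hypothetical)
        double nu = 1.0e-6;  // kinematic viscosity of water [m^2/s]
        System.out.println("Re_p = " + particleReynolds(w, ds, nu));  // 25.0
        System.out.println("mu   = " + rippleFactorMPM(40.0, 50.0));  // ~0.716
    }
}
```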
Step 2: The monthly data is converted into seasonal data by averaging the monthly values for each season, i.e., the monsoon, post-monsoon, and pre-monsoon seasons.
Step 3: Seasonal data is converted to annual data.
Step 4: The value of the ripple factor is obtained using the six approaches.
Step 5: Multiple regression is performed on these data and results are obtained for the three seasons.
Step 6: The Origin software is used to develop a mathematical model correlating the particle Reynolds number and the ripple factor for each approach, for the average, and for the multiple-regression ripple factor.

6. ANALYSIS OF RESULTS
For almost all methods, the variance of the ripple factor calculated at a particular station is nearly uniform. In this study we try to develop simple equations that are better adapted to the river under consideration. Values based on the average of the six different methods can be used without loss of precision. The pattern of variation of the hydraulic parameters follows a particular path, and following this path a multiple regression equation for the ripple factor is developed. The variation of the hydraulic parameters is such that the ripple factor shows a very large variation during the monsoon. Given such a distribution, the best-fit curve based on the method of least squares does not give satisfactory results or a good coefficient of determination on an annual basis; we therefore study the seasonal variation of the ripple factor. The ripple factor takes into account the effect of non-uniformity of the flow. This non-uniformity can be correlated with bed conditions, flow conditions, dynamic conditions, etc., and the relationship between the ripple factor and various dimensionless parameters can be established for given conditions of sediment specific gravity, diameter, and sediment characteristics. It is very difficult to correlate all the parameters that affect the ripple factor; however, depending on local conditions, the correlation between the ripple factor and the variables that affect sediment transport can be made. The Origin software is used to develop a mathematical model correlating the particle Reynolds number and the ripple factor for each approach, the average, and the multiple-regression ripple factor; Table 1.2 presents the mathematical models developed. Statistical analysis of the curve plotted between the ripple factor and the particle Reynolds number is performed using a non-linear least-squares fitter to obtain the best-fit curve. From Figure 4, which plots particle Reynolds number versus ripple factor, it can be seen that the pattern of variation is the same for all methods except Meyer-Peter's and Kalinske's. The comparison of the six methods shows that the Meyer-Peter and Kalinske equations deviate the most. The patterns of variation of the multiple-regression and averaged ripple factor curves appear similar. Comparison of the six methods with the corrected average ripple factor curve and the multiple regression curve shows that the Meyer-Peter and Kalinske ripple factors give the largest deviations, with an extremely large deviation observed in the case of the Kalinske ripple factor curve.
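Returning to the data-preparation steps of Section 5, the aggregation in steps 1 to 3 can be sketched as follows. This is a minimal illustration under stated assumptions: the paper's actual workflow used MS Excel and Origin, and the date-key format and June-to-September monsoon window here are invented for the example.

```java
import java.util.Map;
import java.util.stream.Collectors;

public class SeasonalAggregator {
    /** Average daily discharges (keyed "YYYY-MM-DD") into monthly means. */
    static Map<String, Double> monthly(Map<String, Double> daily) {
        return daily.entrySet().stream().collect(Collectors.groupingBy(
                e -> e.getKey().substring(0, 7),          // "YYYY-MM"
                Collectors.averagingDouble(Map.Entry::getValue)));
    }

    /** Average the monthly means over the monsoon months (June-September assumed). */
    static double monsoonMean(Map<String, Double> monthlyMeans) {
        return monthlyMeans.entrySet().stream()
                .filter(e -> {
                    int m = Integer.parseInt(e.getKey().substring(5, 7));
                    return m >= 6 && m <= 9;              // monsoon window
                })
                .mapToDouble(Map.Entry::getValue)
                .average().orElse(Double.NaN);
    }
}
```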

Table 1.1 Approach, concept, and characteristics of the bed load formulas

1. Du Boys (empirical; excess shear):
(1) The effect of bed forms is not considered. (2) The bed load moves in a series of parallel layers, and the velocity of the lowest layer is zero. (3) A linear variation of velocity is assumed. (4) At the critical condition, the entire bed moves as a single layer. (5) Developed for uniform material with different sediment densities.

2. Shields (dimensional considerations; excess shear):
(1) Based on sediment sizes ranging from 1.56 mm to 2.47 mm and specific gravity ranging from 1.6 to 4.2. (2) Variation of results up to 200%.

3. Meyer-Peter and Müller (empirical; excess shear):
(1) The effect of bed forms is considered. (2) The total shear stress is partially used to overcome the form resistance of ripples. (3) Bed load transport is a function of the shear stress due to grains. (4) The slope of the channel is divided into two parts: (a) S', the slope required to overcome grain resistance, and (b) S'', the slope required to overcome the resistance of bed irregularities. (5) The values of the constants differ for uniform and non-uniform material. (6) Used for rivers carrying coarse bed material. (7) Its results almost coincide with those of Einstein's equation, which is more complicated.

4. Einstein-Brown (semi-theoretical; fall velocity criterion):
(1) It does not contain an explicit correction for shear stress, using the fall velocity of particles instead; temperature effects are taken into account through the kinematic viscosity.

5. Kalinske (semi-theoretical; modern theory of turbulence):
(1) The transported bed load is related to the characteristics of the turbulent flow and to uniform bed material. (2) A Gaussian distribution is assumed for the flow velocity near the bed.

6. Bagnold (semi-theoretical; stream power concept):
(1) The concept of dispersion of solid particles under shear is used. (2) Total shear stress = shear stress at the boundaries + shear stress due to particle collisions. (3) The shear stress due to particle collisions depends on the normal force on the particles and the angle of internal friction.

7. DISCUSSION OF THE RESULTS
The following findings can be summarized from the above study:
1. An extremely large deviation is observed in the case of the Kalinske approach, since his formula assumes a uniform bed material, which is not possible in the case of river flow.
2. The pattern of variation of the particle Reynolds number with the ripple factor is the same for all methods except Meyer-Peter's and Kalinske's.
3. The variations of the ripple factor with the particle Reynolds number for the multiple-regression and averaged ripple factor curves appear similar.

8. REFERENCES
[1] V. T. Chow, Open Channel Hydraulics, McGraw-Hill Book Co.
[2] H. A. Einstein and F. M. Abdel-Aal, Einstein's bed load function at high sediment rates, JHD, Proc. ASCE, Vol. 98, No. HY-1, January.
[3] R. J. Garde and K. G. Ranga Raju, Resistance relations for alluvial channel flow, JHD, Proc. ASCE, Vol. 92, No. HY-4, July.
[4] W. H. Graf, Hydraulics of Sediment Transport, Chapter II, Mechanics of Bed Forms, McGraw-Hill Book Company, 1971.
[5] D. B. Simons and D. V. Richardson, Flow in alluvial channels, in River Mechanics, edited by H. W. Shen, Vol. 1, Fort Collins, Colorado, 1971.
[6] E. Meyer-Peter and R. Müller, Formulas for bed load transport, Proc. IAHR, 2nd Congress, Stockholm.
[7] V. A. Vanoni and G. N. Nomicos, Resistance properties of sediment-laden streams, Trans. ASCE, Vol. 125.

Table 1.2 Mathematical models for particle Reynolds number vs. ripple factor, Tapi River at Sarangkheda, monsoon season (model forms, with coefficients a to e, Chi^2, and R^2 as fitted):

Sr. No. | Scientist | Model | Equation
1 | Einstein-Brown | Asymptotic1 | y = a - b*c^x
2 | Du Boys | Allometric1 | y = a*x^b
3 | Shields | Cubic | y = a + b*x + c*x^2 + d*x^3
4 | Meyer-Peter | Poly4 | y = a + b*x + c*x^2 + d*x^3 + e*x^4
5 | Bagnold | Cubic | y = a + b*x + c*x^2 + d*x^3
6 | Kalinske | Allometric1 | y = a*x^b
7 | Average | Poly4 | y = a + b*x + c*x^2 + d*x^3 + e*x^4
8 | Multiple regression | Poly4 | y = a + b*x + c*x^2 + d*x^3 + e*x^4
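For readers who wish to evaluate the fitted forms of Table 1.2, the three model families reduce to the following functions. This is a sketch only: the coefficients a to e are placeholders to be supplied from the fitted results.

```java
public class RippleModels {
    /** Allometric1 (power law): y = a * x^b. */
    static double allometric1(double x, double a, double b) {
        return a * Math.pow(x, b);
    }

    /** Asymptotic1: y = a - b * c^x. */
    static double asymptotic1(double x, double a, double b, double c) {
        return a - b * Math.pow(c, x);
    }

    /** Fourth-order polynomial (set e = 0 for the cubic models). */
    static double poly4(double x, double a, double b, double c, double d, double e) {
        return a + b*x + c*x*x + d*x*x*x + e*x*x*x*x;
    }
}
```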

Fig. 2 Tapi basin.
Fig. 3 Savkheda, Sarangkheda, and Ukai dam gauging stations.
Fig. 4 Particle Reynolds number vs. ripple factor, Tapi at Sarangkheda, monsoon season (curves: E. Brown, Du Boys, Shields, Meyer-Peter, Bagnold, Kalinske, average, and multiple regression).

Non-invasive method for pre-hospitalization treatment of myocardial infarction patients

Syed Ammar Zaidi (1), Marcin Marzencki (2), Carlo Menon (1) and Bozena Kaminska (2)
(1) MENRVA Group, Simon Fraser University, Burnaby, BC
(2) CIBER Lab, Simon Fraser University, Burnaby, BC

Abstract
We propose a novel method for the treatment of myocardial infarction using low-frequency timed diastolic vibrations. It can be applied quickly after the onset of symptoms by non-specialized personnel, drastically improving the patient's chances of survival. The method is based on applying low-frequency mechanical vibrations in synchronization with the patient's cardiac cycle to facilitate the disruption and removal of acute coronary thrombosis. We present an analysis of the proposed methodology and provide the experimental results obtained with a first prototype of a diastolic timed vibrator. We show that vibrations of the required frequency can be successfully synchronized with an ECG signal in real time.

I. INTRODUCTION
Heart disease is the leading cause of death in the United States, with a higher mortality rate than cancer (malignant neoplasms) [1]. More than 7 million men and 6 million women live with some type of coronary disease. More than a million people suffer a coronary attack (new or recurrent) each year, and about 40% of them die as a result of the attack [2]. This means that approximately every 65 seconds, an American dies from a coronary event. Myocardial infarction (MI), or heart attack, is usually caused by a blood clot, also known as a thrombus, in the arterial vasculature surrounding the heart. MI refers to myocardial cell death and occurs due to complete coronary obstruction, resulting in profound impairment of blood flow and an inadequate oxygen supply to the heart muscle. Once such a blockage begins, cell death can occur in as little as 20 minutes, and complete death of all myocardial cells at risk can occur within 2 to 4 hours [3]. Various methods have been developed to treat thrombus before MI occurs. Techniques range from surgical procedures, such as coronary artery bypass grafting, to minimally invasive procedures, such as angioplasty, atherectomy, thrombectomy, and intra-arterial thrombolysis [4]. Procedures such as angioplasty involve pressing the thrombus against the vessel walls with a balloon catheter or pulling it out of the vessel. Alternative invasive procedures, such as intra-arterial thrombolysis, involve the direct insertion of thrombolytic agents, i.e., tissue plasminogen activator (TPA), into the artery through a process known as catheterization. These agents are capable of dissolving the thrombus and are inserted after the location of the thrombus has been determined by another catheterization process known as coronary angiography [5]. Other methods include exposing blood clots to low-frequency continuous-wave ultrasound; in these methods, known as ultrasound-induced clot dissolution, treatment depends on the intensity and duration of the ultrasound [6]. Preferred invasive methods, such as angioplasty, require significant setup time and resources. Before the treatment itself can begin, patients must undergo a series of clinical diagnostic tests including, but not limited to, an electrocardiogram, blood tests, and coronary catheterization [3]. Notably, the most effective treatment occurs during the first 60 minutes after symptom onset, known as the golden hour.
However, by the time the average patient arrives at the hospital (approximately 2.7 hours after symptom onset), most deaths have already occurred [5]. This is compounded by the fact that those who survive this deadly period of the disease have to spend more time in hospital for tests or be transported to a cardiac catheterization lab before treatment can begin. As a result, speed of intervention is the most important factor in saving a patient's life and is the key to effective heart attack treatment. It has been suggested [7] that, rather than being transported to regional revascularization centers, myocardial infarction patients should receive immediate care at nearby hospitals or other facilities. Moreover, if treatment could start during transport to the hospital, it would play a key role in ensuring patient survival. In this article, we propose a method that could be safely applied by non-specialist personnel on site or during transport of the patient to the hospital. We believe that this method could drastically improve the survival rate of heart attack patients. It consists of applying low-frequency mechanical vibrations synchronized with the patient's cardiac cycle, preferably together with the injection of clot-dissolving drugs. Our article is organized as follows. After the introduction, we present our method in detail, including a review of the state of the art and an analysis of the underlying idea. Subsequently, we present the architecture of the proposed system and discuss various aspects of its implementation. Next, we describe a prototype system and the experimental results obtained. We close the article with conclusions and proposals for future work.

II. PROPOSED METHOD
We present a novel, non-invasive method suitable for the treatment of myocardial infarction and other low coronary blood flow states in humans. It relies on the application of low-level vibrations to the chest area along with the application of clot-dissolving drugs. By vibrating during the diastolic period of the cardiac cycle (relaxation of the heart), it is expected that coronary flow will increase and thrombus dissolution will be achieved. Instead of continuous vibrations, timed diastolic vibrations should be performed to ensure that the heart is not disturbed during the systolic period of the cardiac cycle (contraction of the heart), which could have very negative effects, especially on a weakened heart. In this study, our first goal is to develop a vibratory system that is independently controlled by an ECG signal in real time. The activation of the vibration system must be synchronized with the ECG signal in such a way that it vibrates during diastole and all vibrations cease during systole. Our goal is to create a device for use in the field: a diastolic timed vibrator (DTV) to be used as an emergency medical system to remedy acute states of low coronary blood flow, such as those present in angina (chest discomfort secondary to narrowing of a coronary artery) or heart attack (an acute blockage of a coronary artery, usually by a blood clot). The DTV will impose mechanical vibrations on the patient's chest to improve coronary blood flow. Our objective is to create an economical and portable system that requires minimal intervention by specialized personnel.

A. Diastolic mechanical vibrations
There is strong experimental evidence that diastolic mechanical vibrations on the chest wall increase human coronary blood flow (CBF). In previous studies, diastolic vibrations applied to patients with coronary artery disease (CAD) and to normal subjects resulted in an immediate increase in CBF, as measured by both transesophageal Doppler and coronary flow wire; the increase in CBF in CAD patients was significantly greater than in normal subjects [8]. Furthermore, clinical studies conducted in humans and canines have shown that external diastolic vibrations can relieve incomplete relaxation (IR) and improve systolic function of the heart [9], [10]. Similar studies of external vibrations applied to human patients with aortic regurgitation (AR) and ischemic heart disease (IHD) resulted in a decrease in left ventricular systolic pressure, demonstrating that vibration-induced depression does occur in humans [11]. Clinical studies have shown that diastolic mechanical vibrations timed around 50 Hz improve coronary blood flow and left ventricle (heart muscle) performance in human volunteers, with and without coronary artery disease [8], [9]. Low-frequency vibration is a known potent vasodilator, especially for arteries with some degree of active tension or spasm [12], which is often the case in heart attack [13], and has also been shown to significantly improve clot dissolution, with or without a thrombolytic agent, both in vitro and in commercially available catheter systems [4]. Low-frequency external tapping has also been documented to lead to reliable and immediate clearance of acute coronary thrombosis in animal models, presumably by dislodging poorly adherent clot from a narrowed intraluminal surface [14].
We suggest that the efficacy of disrupting and removing thrombosis could be maximized by providing vibrations at different frequencies (by frequency sweep or random frequency variation), as this would facilitate the breaking of the different chemical bonds in the clot and add turbulence in the vascular system, improving the mixing of the clot-dissolving agent and increasing the erosion of the clot surface. Vibrations in the 40-60 Hz range fall within the cardiac muscle resonance frequency spectrum [15], which would ensure a maximized therapeutic effect.

B. ECG synchronization
Our method provides a new technique for disrupting and removing thrombus present in the patient's arterial vasculature surrounding the heart. During systole, the heart contracts and the pressure needed to push blood is generated within the chambers of the heart. Vibration, which can interfere with the contractile process, must therefore not be applied to the heart during this phase and should only be applied when the heart is in the relaxation (diastole) phase [11]. Moreover, vibrations synchronized exclusively with the diastole of the cardiac cycle have been shown in clinical studies to facilitate cardiac muscle relaxation and, paradoxically, improve the force of cardiac contractions, and can therefore be used safely [10], [16]. We propose to develop a device that applies mechanical vibrations to the chest to increase coronary perfusion, disrupt blood clots, and generally improve blood circulation. Therapy can be performed by a paramedic in an ambulance or by a trained person in a clinic or emergency room.

C. System architecture
The proposed system is composed of four main parts: a vibrator, an accelerometer, an ECG system, and a LabView VI that contains the control and signal processing. Fig. 1 presents a schematic of the system architecture.
1) Vibrator: We use a standard electromagnetic motor driven by a 50 Hz source to generate a rotary movement translated into linear movement of a plate. In addition, a variable damping stage is added to adjust the amplitude of the generated vibrations. In order to generate vibrations only in the desired periods of the cardiac cycle, a fast electromagnetic relay (1) is inserted in the power supply line of the motor. The relay is controlled from a DAQ (2) connected to the LabView interface.
2) Accelerometer: A MEMS accelerometer (3) has been integrated into the vibrating plate to provide feedback on the generated vibration amplitude. All three axes can be monitored for added reliability. The accelerometer signal is digitized by the DAQ and sent to the LabView interface for further processing.
(1) Panasonic APE. (2) An NI 9205 with analog inputs and a USB-6008 with analog outputs are used in our system. (3) ST Microelectronics LIS3L02AL.

Fig. 1. Block diagram of the proposed diastolic timed vibration system: LabView VI (frequency generator, pulse generator), relay, vibrator with accelerometer feedback on the patient's chest, DAQ, ECG electrodes and ECG system; processing stages include amplification and noise removal, ECG data filtering, R-wave detection, heart rate calculation, counter length calculation, and counter control and pulse generation.

3) ECG: A Burdick EK10 ECG acquisition system is used to amplify and filter the human ECG signal. The resulting signal is digitized by the DAQ and then processed in LabView.
4) LabView VI: An algorithm has been developed to detect systole and diastole in an ECG signal in real time to enable timed diastolic vibration. We created a virtual instrument using National Instruments LabView. As shown in Fig. 2, the real-time ECG signal is first filtered to remove noise: a low-pass filter removes unwanted high-frequency components, and the signal is then high-pass filtered to detect the QRS complex. The resulting data is used for heart rate calculation and detection of the R peak in the QRS complex. The heart rate is calculated as the ratio of the number of QRS complexes to the time elapsed during a specified interval. After determining the heart rate, and therefore the period of the ECG signal, the lengths of two counters are calculated to operate the vibratory system. These counters, in coordination with R-peak detection, are timed to stop the vibratory system during systole and enable it during diastole. After an R peak is detected, the systole counter is reset, disabling the vibration system until the systolic cycle is complete. Once the systole counter reaches its limit (set to coincide with the completion of the systolic cycle), the vibration system is re-enabled for a duration determined by the diastole counter; the diastole counter is accordingly set to reach its limit before the start of the next systolic cycle. In case the two counters overlap due to an incorrect calculation of the counter lengths, the systole counter takes precedence over the diastole counter, ensuring that any detection of an R peak disables the vibratory system. The duration of the systole counter was approximated from QT interval calculations made during previous clinical studies of patients with heart disease. Based on the data collected in those studies, a heart rate of 30 BPM (2-second period) would have a QT interval of approximately 0.5 seconds; as a result, the systole counter was set to 1/4 of the period duration [17]. This estimate was used during the initial testing phase and will be replaced by a more efficient regression-based algorithm. The resulting system ensures that the vibrations stop before the QRS complex begins. The counters are updated in real time to adapt to different heart rate values.

III. EXPERIMENTAL RESULTS
To verify the accuracy of our predictions about the effectiveness of timed diastolic vibrations, we started by building a model system, focusing on the proper synchronization of the mechanical vibrations with the ECG signal. To determine whether an external vibration system can be controlled by an ECG signal, we used a 5 V DC geared motor as the vibrator, and the ECG signal was generated by a Fluke PS420 Multi-Parameter Patient Simulator. The 50 Hz vibration was generated simply by placing an eccentric weight on the DC motor shaft and adjusting the drive voltage.
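The counter-gating behavior described in 4) above can be rendered as a short sketch. The actual implementation is a LabView VI; the plain-Java class below, including its names and tick-based timing, is an illustrative assumption rather than the authors' code.

```java
public class DiastolicGate {
    private long systoleTicks;   // length of the systolic (vibration-off) window
    private long diastoleTicks;  // length of the diastolic (vibration-on) window
    private long counter = Long.MAX_VALUE;  // ticks since the last detected R peak

    /** Recompute counter lengths from the measured heart rate, at tickHz ticks/s.
     *  The systolic window is approximated as 1/4 of the beat period [17]. */
    public void updateHeartRate(double bpm, double tickHz) {
        double periodS = 60.0 / bpm;                        // e.g. 30 BPM -> 2.0 s
        systoleTicks  = Math.round(tickHz * periodS / 4.0); // e.g. 0.5 s at 1 kHz
        diastoleTicks = Math.round(tickHz * periodS) - systoleTicks;
    }

    /** Called every tick; rPeak is true when an R wave was just detected.
     *  Returns true when the relay should let the vibrator run. */
    public boolean tick(boolean rPeak) {
        if (rPeak) counter = 0;   // R peak always takes precedence: stop vibration
        else counter++;
        // off during systole, on during diastole, off again as the next beat nears
        return counter >= systoleTicks && counter < systoleTicks + diastoleTicks;
    }
}
```

At 30 BPM and a 1 kHz tick rate, this gives a 0.5 s off-window after each R peak followed by an on-window on the order of the 1.40 s diastolic vibration window reported in the experimental results.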
The ECG signal was maintained at a heart rate of 30 BPM. Figure 3 shows the generated mechanical vibrations synchronized with a real-time ECG signal. The total period of the heartbeat is 2 seconds, of which the PQRST region of the ECG lasts 0.55 ± 0.05 s. Although the systole (QRST region) only lasts 0.41 ± 0.04 seconds, the vibration system is turned off before the PQRST region begins to ensure that vibrations occur only when the heart is in its relaxed state. During the diastolic cycle, the DC motor is allowed to vibrate for 1.40 ± 0.04 seconds. There was some overlap of vibrations at the outer edges of the P and T waves of the ECG signal; however, it was ensured that vibrations never occurred during the critical QT interval. After successfully synchronizing a DC motor with a real-time ECG signal, a commercially available Human Touch HT-1280 massager (having a frequency and stroke amplitude suitable for clinical use) was tested as the vibrator. Fig. 4 shows the modified massager with the MEMS accelerometer mounted on the vibrating plate. In this configuration, a relay was placed between the power supply and the massager,

Fig. 3. Experimental results for mechanical vibrations (represented by the acceleration amplitude) synchronized with the diastolic period of the cardiac cycle (represented by an ECG signal at 30 BPM).

Fig. 4. Human Touch HT-1280 massager with modified motor control and a feedback MEMS accelerometer mounted on the vibrating plate.

Fig. 5. Experimental results of the vibration signal generated by the modified Human Touch HT-1280 massager synchronized with an ECG signal at 30 BPM, showing the delay between the termination of the activation signal and the actual termination of vibrations.

allowing the massager to be triggered through the LabView VI. This device has a much higher rotational inertia, so a special damping system had to be introduced to allow more precise synchronization of the generated vibrations. Figure 5 shows the experimental results obtained with the vibratory system synchronized with the ECG signal. The activation signal indicates when the vibrations are enabled according to the counter algorithm described in the system architecture section. It can be seen that, although there is a delay between the deactivation of the activation signal and the actual stopping of the vibrations, all vibrations stop at the beginning of the QRS complex. The delay between the end of the driving pulse and the actual termination of the mechanical vibrations was measured to be approximately 17 ms. The delay caused the vibrations to overlap with the beginning of the PR interval and the last part of the T wave; for the most part, however, the QT interval was free of vibrations, and with a more precise damping system and ECG processing algorithm the delay will be eliminated. The approximate counter-based algorithm will be replaced by a more concise regression-based algorithm, ensuring accurate QT interval calculations and precise activation of the vibrations.

IV. CONCLUSION
The presented diastolic timed vibratory system is a novel and innovative method for the rapid treatment of heart attacks and other cases of low coronary blood flow. Clinical studies have shown that mechanical vibrations can increase coronary blood flow, in addition to helping to improve the systolic function of the heart. In case of infarction, mechanical vibrations together with the application of thrombolytic agents can improve clot dissolution and therefore increase the patient's chances of survival. We present the first prototype of a diastolic timed vibrator driven by a LabView VI and synchronized with a commercial ECG system. We first demonstrated successful ECG signal synchronization using a simple DC motor; subsequently, we modified a massager device and successfully verified its functionality as a diastolic timed vibrator. The algorithm used in the LabView VI control was based on the activation of counters based on the
duration of the systolic and diastolic cycles of a real-time ECG signal. The results showed that we can accurately synchronize mechanical vibrations with the systolic and diastolic ECG cycles in real time.

V. FUTURE WORK
Our next goal is to use our test system on a live subject to determine the effectiveness of timed diastolic vibrations on a real thrombus. Initially, these tests will be performed on animal subjects. We also plan to build a diastolic timed vibrator with an integrated miniature ECG acquisition system and a microcontroller with the timing algorithms implemented. This would give us an autonomous device suitable for clinical use. Another technical challenge will be to reduce the time delay between the vibration deactivation command and the actual stopping of the vibration, so as to allow vibration termination upon initial detection of an R wave and avoid systolic vibration during irregular rhythms.

ACKNOWLEDGMENT
This project was supported by the NRC-IRAP Pacific Region in collaboration with Ahof Biophysical Systems Inc. and Simon Fraser University. The authors wish to thank Andrew Hoffman for his suggestions and contributions to the project.

REFERENCES
[1] National Center for Health Statistics, Deaths and Percentage of Total Deaths by Top 10 Causes of Death: United States. [Online].
[2] American Heart Association (2003), Heart attack and angina statistics. Consulted in December. [Online].
[3] W. J. Kostuk, Coronary Artery Disease: Angina, Unstable Angina, Myocardial Infarction, October 2008, discussion paper prepared for the Workplace Safety and Insurance Appeals Tribunal.
[4] M. A. Evans, D. M. Demarais, C. S. Eversull, and S. A. Leeflang, Clot Dissolving System and Methods, US Patent 6,663,613 B1, December 16.
[5] G. O. Turner and M. B. Rosin, Recognizing and Surviving Heart Attacks and Strokes: Lifesaving Tips You Need Now, University of Missouri Press.
[6] M. Nedelmann, C. Brandt, F. Schneider, B. M. Eicke, O. Kempski, F. Krummenauer, and M. Dieterich, Ultrasound-induced blood clot dissolution without a thrombolytic drug is more effective with lower frequencies, Cardiovascular Diseases, March.
[7] M. Larkin, Speed is the key to effective heart attack treatment, The Lancet, Vol. 355, p. 472.
[8] N. Taihei, X. Yoshiro, T. Takehiko, H. Hideyuki, H. Nobuo, K. Hideichi, S. Kunio, K. Hiroshi, and C. Noriyoshi, Diastolic mechanical vibration on the chest wall increases human coronary blood flow, Japanese Circulation Journal, Vol. 58, No. 7, p. 476.
[9] Y. Koiwa, H. Honda, T. Takagi, J. Kikuchi, N. Hoshi, and T. Takishima, Modification of human left ventricular relaxation by small-amplitude, phase-controlled mechanical vibration on the chest wall, Circulation, Vol. 95, No. 1, January.
[10] Y. Koiwa, T. Takagi, J. Kikuchi, H. Honda, N. Hoshi, and T. Takishima, The improvement of depressed left ventricular systolic function by external vibration in diastole, Tohoku J Exp Med, Vol. 159, No. 2, October.
[11] Y. Koiwa, T. Ohyama, T. Takagi, J. Kikuchi, H. Honda, N. Hoshi, and T. Takishima, Clinical demonstration of vibration-induced depression of left ventricular function, Tohoku J Exp Med, Vol. 159, No. 3, November.
[12] L. E. Lindblad, R. R. Lorenz, J. T. Shepherd, and P. M. Vanhoutte, Effect of vibration on a canine cutaneous artery, Am J Physiol, Vol. 250, No. 3 Pt 2, pp. H519-H523, March.
[13] P. B. Oliva and J. C. Breckinridge, Arteriographic evidence of coronary artery spasm in acute myocardial infarction, Circulation, Vol. 56, No. 3, September.
[14] J. Folts, An in vivo model of experimental arterial stenosis, intimal damage, and periodic thrombosis, Circulation, Vol. 83, No. 6 Suppl, pp. IV3-IV14, June.
[15] Y. Koiwa, R. Hashiguchi, T. Ohyama, S. Isoyama, S. Satoh, H. Suzuki, and T. Takishima, Measurement of instantaneous viscoelastic properties by impedance-frequency curve of the ventricle, Am J Physiol, Vol. 250, No. 4 Pt 2, pp. H672-H684, April.
[16] Y. Koiwa, T. Naya, T. Takagi, H. Hond, N. Hoshi, H. Kamada, K. Shirato, H. Kanai, and N. Cyubachi, Diastolic mechanical vibration on the chest wall increases human coronary blood flow, Japanese Circulation Journal, Vol. 58, No. 7, p. 476.
[17] J. K. Alexander, M. I. Ferrer, R. M. Harvey, and A. Cournand, The Q-T interval in chronic cor pulmonale, Circulation, Vol. 3, No. 5, May.

Value Engineering for the Mianeh 400/230/63 kV Transformer Station: improving quality and optimizing project cost and start-up time

Sara AMINAEE, Managing Director, Best Solution Value Engineering Consulting Firm, Toronto, Ontario, Canada
and Seyed Ataollah RAZIEI, Project Engineering Group, Pad Pay Sazeh Consulting Engineers Company, Tehran, Iran

ABSTRACT
Value engineering is a recognized and valuable management procedure used to improve system output. It is a systematic way of making effective use of the budget allocated to projects and of identifying products and services. Against this background, and to meet the requirements of the area and of Mianeh Foolad Co., approval was obtained to establish a 400/230/63 kV substation. The project was assigned to the Azarbaijan Regional Electricity Company as D.B.F. Based on the credit estimated for this project, it was decided that bidding would be carried out for the acquisition, installation, and commissioning of equipment in two forms: obtaining financing from the bid winner, and following the general process on the existing land. In order to implement the project more effectively, reduce costs, and shorten project start-up, the Azarbaijan Regional Electricity Co. proposed to Tavanir Co. (Iran's power generation, transmission, and distribution management company) the use of value engineering processes, and with the confirmation of the Value Engineering Committee a value engineering workshop was established. In this paper, after a brief introduction to value engineering and to the project and its workshop, we review the workshop process in detail, noting that the workshop achieved a saving of 27% of the required project budget.

1. VALUE ENGINEERING
1-1 Definition
According to the American Society of Value Engineering, value engineering is a systematic approach with specific techniques that identifies the functions of products and services and establishes their financial value, so as to achieve production at lower cost while ensuring the required quality and accounting for risk factors. It can be said that value engineering is an organized effort to analyze the operation of systems, equipment, services, and institutions in order to achieve the required function at the lowest cost over the project's life, with the relevant quality and adequate safety.
1-2 Value engineering executive process
In this work, to study the executive process of value engineering, the method presented by the American Society of Value Engineering is used. It is organized as follows:
Pre-study phase; this phase includes:
1- Recognition and compilation of information
2- Development of a cost model
3- Selection of team members
4- Preparation of the study schedule
5- Preparation of the work schedule
Workshop job plan:

This phase includes six sub-phases: 1- Study and review phase 2- Function analysis phase: its objective is to develop the functional breakdown on which the study plays an important role 3- Creativity phase: the purpose of this phase is to develop ideas for performing the basic functions 4- Evaluation phase: the objective of this phase is to evaluate the suggested ideas and discard inappropriate ones 5- Development phase: the objective is to work the best ideas from the previous phase into the best option 6- Presentation phase: the objective is to reach an agreement and define the responsibilities of the designers and the client of the project with respect to the proposed option.
The post-study phase: the purpose of this phase is to implement the suggestions that have been confirmed by the Value Engineering study.

2- SCOPE OF WORK OF THE PROJECT
The establishment of the 400/230/63 KV Mianeh substation was assigned by the planning ministry to the planning and development assistant for commissioning on a D.B.F basis, in accordance with the requirements of the power grid and the needs of the Foolad Mianeh company, for the purposes of: exchanging energy between the Ardebil power plant, neighboring countries and the entire grid; solving the region's low-voltage problem; increasing the stability and reliability of the grid; providing electricity for industrial customers and for water pumps for agricultural purposes; reducing reactive power outages; and supporting the growth of the city's electricity demand. After the project assignment was received, the selection of the fourth factor began and the agreement was put into operation. During this selection, land for the substation near the Foolad company was identified. After the baseline study, a 320×350 plot was chosen and the relevant committee approved its purchase; once the legal formalities were completed, the land was handed over to the project contractor. Through the fourth factor, the project consultant was chosen and the contract was signed. The plan of the Mianeh 400/230/63 KV substation is as follows: a 400 KV busbar with a 1.5-breaker arrangement; two 400 KV feeders for the overhead lines of the Sahand and Shahid Ghayati power plants; two 400 KV feeders to feed the 315 MVA transformers; two 400/230 KV autotransformers with a capacity of 315 MVA; one 400 KV reactor with a capacity of 50 MVAR; and a feeder for the 400 KV reactor. The control-room features were a steel frame, deck-section slabs, stone floors, aluminum windows, 4 mm glass, and a split heating and cooling system. The assigned project was to be executed as D.B.F but, in view of the consultant's conditions, the bidding documents were prepared both as D.B.F and in cash. One tender for land enclosure and paving activities and another for the purchase, installation, testing and commissioning of equipment, together with the construction activities, were finalized. Due to the high price of the transformers, a separate tender was issued for their purchase.

3- VALUE ENGINEERING WORK PLAN
3-1- Pre-study phase: The first Value Engineering meeting was held with representatives of the responsible units.
In this workshop the following were presented and discussed: the context and process of Value Engineering; the history of the project, by the consultant and the client; an introduction to the base plan; an introduction to the information phase; the determination of the boundaries and limitations of the project; the determination of the limitations of Value Engineering and of the project requirements (sacred cows); the determination of the value standards; and the description of the base plan of the project by the consultant and the executor of the 400 KV overhead network. In addition, at that kick-off meeting the workshop schedule was confirmed. The research team included the executives, owners and the corresponding units, made up of: 1- Executives: the planning and development assistant, the overhead-line executive, the substation executive and the building executive. 2- Beneficiaries: the operations assistant and the transmission technical office. It goes without saying that the members were chosen indirectly, through correspondence of the Value Engineering committee with the corresponding units and the introduction of representatives by the unit heads and the companies relevant to the workshop. In that meeting the context and principles of Value Engineering were presented, and the members were asked to set aside their regular workplace duties in order to concentrate on the value engineering study.
3-2- Workshop phase:

The main study phase is considered the principal phase of value engineering; the whole process of problem recognition, decision making and idea selection is carried out here, as described below.
Review and study of the information: The scarcity of information and the use of incomplete or incorrect information are the main causes of a drop in the value index, which is why Value Engineering analyzes the information in order to have more useful and better data; as a result, the quality of the Value Engineering study increases. The next steps of the information phase are to define the objective of the project, the objectives of the workshop, the limits of the studies, the existing limitations, the selection rules and the beneficiaries, as follows:
The objectives of the project: the exchange of the energy produced by the Ardebil power plant with neighboring countries and the entire electrical network; solving the low-voltage problem; increasing the stability and reliability of the grid; providing electricity for industrial customers; providing electricity for the agricultural water sector; reducing reactive power outages; and supporting the growth of electricity in the urban sector.
The objectives of the workshop: to increase the quality and improve the plan; to reduce costs; to reduce the start-up period.
Limitations: the size of the land, given its location and natural resources, was unalterable; the limitation of the estimated budget; the limitation of the start-up period of the project.
Beneficiaries: the Azarbayjan regional electric company; the electric distribution company; the beneficiary's substation repair and maintenance staff; local industries and farmers; the project consultant; the project contractor; the Ministry of Natural Resources and Environment and other government agencies; the residents near the substation.
The Value Engineering group, after a complete analysis, defined the value standards as follows: reliability of the substation; correct operation of the operating system; ease of manufacturing and construction; ease of maintenance and reduction of repairs; ease of maneuvering; reduction of primary investment.
Function analysis (FAST): Function analysis and the drawing of the FAST (Function Analysis System Technique) diagram were defined as the core of value engineering, and doing this completely and accurately has a great influence on exploring innovation and analyzing the cost of new ideas. The Value Engineering team, after examining the different aspects of the project, defined the function of each part on that basis. In this phase the project and the actions to be managed were explored once again, and the functions were finally set out in schedule number 1 (attached). The FAST diagram should be drawn after the detailed function schedule has been prepared. Two ways of drawing the FAST diagram are defined: one is the standard, classic drawing, and the other is drawn according to the customer's requirements. As seen in diagram number 1, the final purpose of the project is supplying electric power, and the main functions of the Mianeh substation were recognized as improving the 400 KV grid connection, feeding the 230 KV Azarbayjan substations and feeding the local 63 KV substations (see diagram number 1, attached). The focal points of the group were defined on two bases.
First, the expensive functions in the FAST diagram and, second, the functions with a wide potential for change. As a result, the focal areas of the workshop for the idea-creation phase were defined as: the power transformers; the 400 KV reactor; the 230 KV switchgear; the 400 KV switchgear; the 63 KV switchgear.
Creativity: After analyzing the functions, the workshop was ready to generate ideas. At the beginning all the members of the team were prepared: explanations were given about the parts of the project, the functions and the priorities. The members of the workshop were then asked to brainstorm ideas for each function, taking the priorities into account, to develop the ideas and to write them down in their documents. This phase lasted 120 minutes and some 79 ideas were finally produced.

After analyzing the ideas, those that were not relevant or were repeated were omitted, some ideas were merged, and finally 24 ideas were chosen for analysis.
Development: After identifying the ideas, the members analyzed the remaining ideas and rejected those that were not technically feasible or were not relevant to the workshop. Several cases were examined by the members of the group, including: connection of the 400 KV reactor to the overhead line using a switch-disconnector; change from the 1.5-breaker arrangement to a complete 4-breaker ring system with the ability to be extended to 1.5; and a full switchgear system on the 400 KV side. To analyze the final ideas, the selection was made using the suggestions of the members, the evaluation matrix (revised a second time) and the priorities of each standard, considering: the enforceability and form of execution of each possible change in the layout or other aspects of execution; the detailed cost of each changed idea; the advantages and disadvantages of each option; and the effect of implementing each option on the value standards and the beneficiaries. Finally, 7 ideas were chosen. The final ideas were those that the Value Engineering team analyzed against the standards defined in the information phase. They were the following:
A: Options that would reduce cost and increase the value and quality standards, whose execution all the members of the group approved: omit the bypass switch on the 230 KV side; move the 230/63 KV transformer to the higher position; provide for a longer extension for five 400 KV overhead-line feeders; omit one 80 MVA 230/63 KV transformer with its 230 and 63 KV feeders; bring the 63 KV Foolad overhead line in and out of the new substation and omit one of the 80 MVA 230/63 KV transformers without the 63 KV line in/out, owing to the change in the route of the overhead line feeding the Foolad Mianeh company; and change the grading and asphalt the substation land in order to reduce cost.
B: Options that would yield savings but would at the same time reduce the value standards in some respects; the group did not reach complete agreement on these, so it presented a brief report for the client to decide, together with the group's average opinion regarding the standards.
Finally, the final ideas and the analyses of the scored standards were delivered by the members, together with the complementary questionnaires, and the performance of each idea against each standard was finalized. For each idea, the score for each standard was multiplied by the standard's weight to arrive at a final value. Schedule number 2 shows the scoring of each option against the value standards.
The extension phase: Schedule number 3 shows the calculated costs of each option.
4- RESULTS
According to the results of the workshop, the selected options could be implemented in the project without difficulty.
Because the preferred options could be implemented at the same time, the group presented its plan, which consisted of: omitting the bypass switch on the 230 KV side; moving the 230/63 KV transformer to the higher position; providing for a longer extension for five 400 KV overhead-line feeders; omitting one 80 MVA 230/63 KV power transformer with its 230 and 63 KV feeders; bringing the Foolad 63 KV overhead line in and out of the new substation and omitting one of the 80 MVA 230/63 KV transformers without the 63 KV line in/out, owing to the change in the route of the overhead line feeding the Foolad Mianeh company; and changing the grading and paving of the substation land to lower costs. The options of omitting the reactor and of moving the reactor to the transformer winding were referred to the client, as the group did not reach full agreement on them.

References:
[1] S. S. Iyer, Method of Using Value Engineering, Jebel Ameli, Farat Publishing, 2004.
[2] Tavakoli Reza, Shekari Amir, Value Engineering as a Powerful Tool, Tadbir Magazine, Issue 132.
[3] Teri, Michel, Value Management, Mahab Ghods Publishing, 2005.
[4] Arthur E. Mudge, Value Engineering, McGraw-Hill, 1981.
[5] Arthur E. Mudge, Value Engineering: A Systematic Approach, McGraw-Hill, 1971.

Schedule No. 1 — Function analysis: a 25-row table listing each substation system together with the function it performs: 63 KV outgoing and input feeders, power transformer breakers, 230 KV busbar and output feeders, the grounding and auxiliary transformer, 400 KV input and transformer feeders, the bus connector, the neutral reactance (reactive-load compensation), the protection system (fault diagnosis), the local and remote control systems, the DC and AC systems, steel structures (spacing), insulators (isolation), the grounding system, lightning-protection cables, grounding installation equipment, foundations (resisting weight), the control building (protection of AC/DC equipment), the administrative department (office staff), the lighting system and the perimeter wall (physical protection).

Annex No. 2 — Scoring according to value standards and cost model of the final ideas: the options (the base plan; omitting the bypass switch, moving the transformer and providing for a greater extension for 5 feeders; omitting a transformer with its feeder, with in/out of the Canan and Foolad overhead lines; the imperfect 1.5-breaker arrangement on the 400 KV side; changing the 230 KV entry and exit angle of the overhead line; changing the 400 KV side arrangement to double busbar; omitting the reactor; asphalting the land) are scored against the value standards (reliability, cost reduction, ease of execution, ease of maintenance, profitability, ease of operation, environmental standards, possibility of expansion) and compared on the value indicators: option score according to standards, option cost (lifetime expense), option cost (primary investment) and the option value indicator.
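A minimal sketch of the weighted-evaluation arithmetic described above — each option's score per standard is multiplied by the standard's weight, and the total is compared against the option's cost to give a value indicator. The standards, weights, scores and costs below are illustrative assumptions, not the workshop's actual figures:

```python
# Sketch of a value-engineering evaluation matrix: weighted score per option
# divided by its cost gives a value indicator (higher is better).
# All names, weights and figures below are illustrative assumptions.

standards = {"reliability": 0.30, "cost_reduction": 0.25,
             "ease_of_execution": 0.15, "ease_of_maintenance": 0.15,
             "possibility_of_expansion": 0.15}

options = {
    "base plan": {
        "scores": {"reliability": 9, "cost_reduction": 5, "ease_of_execution": 6,
                   "ease_of_maintenance": 7, "possibility_of_expansion": 8},
        "lifetime_cost": 100.0,
    },
    "omit bypass switch": {
        "scores": {"reliability": 8, "cost_reduction": 8, "ease_of_execution": 8,
                   "ease_of_maintenance": 7, "possibility_of_expansion": 8},
        "lifetime_cost": 88.0,
    },
}

for name, opt in options.items():
    # Multiply each standard's score by its weight, then sum.
    weighted = sum(standards[s] * v for s, v in opt["scores"].items())
    value_indicator = weighted / opt["lifetime_cost"]
    print(f"{name}: weighted score {weighted:.2f}, value indicator {value_indicator:.4f}")
```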

Diagram No. 1 — FAST diagram of the Mianeh substation. The diagram decomposes the project's purpose (establishing power exchange) into its main functions — improving the connection of the Sahand plant to the network, improving the 230 KV grid and feeding the 63 KV substations — and their supporting functions, including: connection to the Sahand plant and the Ghayeni substation; supply to the Zobahan, Taghi Dizaj, Mianeh 230 KV and Zanjan 230 KV substations; voltage transformation and supply through the 400 KV and 230 KV power transformers and the 230 KV coupler; the 63 KV bus section; equipment control, feeding circuits, fault clearing, preparation of the ground, test equipment, scope of the lines and deployment of personnel, with standards and technical specifications observed throughout.

Evaluation of alternatives and choice of the optimized solution in a 400 KV Tabas-Bafgh transmission line project: a good experience in eliminating unnecessary costs

Sara AMINAEE, Managing Director - Best Solution Value Engineering Consulting Firm, Toronto, Ontario, Canada and Seyed Ataollah RAZIEI, Project Engineering Group, Pad Pay Sazeh Consulting Engineers Company, Tehran, Tehran, Iran

ABSTRACT
Value engineering is a systematic procedure with specific techniques that identifies products and services and creates financial value for them, so that the project is commissioned in the most profitable way while maintaining quality. In other words, value engineering is a well-organized method of evaluating the performance of systems, instruments, services and organizations in order to meet their requirements at lower cost throughout the commissioning of the project, consistent with its quality and safety measures. In the years since Miles invented value engineering, the method has been applied in numerous countries to the design and construction of various products and has been used in many projects to increase productivity and to find economical solutions for customers. The process has been introduced and used in Iran by the utility companies. The value engineering department of Tavanir Co. (Iran's power generation, transmission and distribution management company) has played an important role as the core of the implementation, planning and use of the method in the power sector. One of the value engineering workshops in this sector was the one held for the 400 KV Tabas-Bafgh transmission line, aimed at reducing costs and increasing effectiveness.

1- INTRODUCTION
Value Engineering is presented as a valuable technical management tool used to improve the quality of systems. It is a systematic way to optimize the consumption of a project's assigned budget by recognizing the functions of products and services. In line with this definition, after the approval of the construction of a 400 KV overhead line from Tabas to Bafgh — to connect the 400 KV network of Khorasan with the national 400 KV network, improve the voltage quality of the Bafgh region, increase the stability and reliability of the entire grid, and transmit the power produced by the Tabas coal-fired power plant to the Yazd regional electricity grid — a value engineering workshop was suggested for this project, in order to improve the plan and reduce the costs and the commissioning period, and the workshop was arranged. In this paper, after briefly introducing value engineering, we introduce the project and its workshop, describe the processes and actions of the workshop and, lastly, note its results.

2- VALUE ENGINEERING
2-1- Definition of the Value Engineering technique
According to the American Society for Value Engineering, it is a systematic tool with specific techniques that identifies the functions of products and services, defines the financial value for producing them at lower cost, and ensures the required quality while controlling the risk factors. The Value Engineering technique is an organized effort to analyze the functions of systems, equipment, services and institutions in order to achieve the required functions at the lowest cost in the project start-up period, with the pertinent quality and appropriate security measures.
2-2- Value Engineering workshop work plan

In this article, to study the executive process of value engineering, the form presented by the American Society for Value Engineering is used. It is classified as follows:
Pre-workshop phase. This phase includes: 1- Recognition and information gathering 2- Creation of a cost model 3- Definition of the scope, purpose and stakeholders of the project 4- Selection of team members according to project requirements 5- Definition of criteria 6- Preparation of the study schedule 7- Definition of constraints and sacred cows 8- Specification of project terminology 9- Preparation of the workshop schedule.
Main phase of the project study. This phase includes six sub-phases: 1- Review prior to the workshop 2- Function analysis (FAST) phase: its objective is to develop the functional breakdown on which the study plays an important role 3- Creativity phase: the objective of this phase is to develop ideas for the basic functions 4- Evaluation phase: the objective of this phase is to evaluate the ideas and filter out inappropriate ones 5- Development phase: the objective is to work the best ideas from the previous phase into the best option 6- Presentation phase: the purpose is to present the selected design to the client and reach a decision.
The post-study phase: the purpose of this phase is to implement the suggestions that have been confirmed by the Value Engineering team.
3- INTRODUCTION OF THE PROJECT
Following the construction of the Tabas power plant, and in order to transmit the energy produced by this power plant to the whole network and to connect the 400 KV network of Khorasan with the 400 KV network of Yazd, the Tabas-Bafgh 400 KV overhead line will be built. The line, with a length of about 295 km, was analyzed as the base plan (at 15% of detailed design) in the Value Engineering workshop. The base plan for the construction of the 400 KV overhead line is as follows: 1- A single copper conductor with a CERLO double bundle 2- Two guard wires. According to the consultant's suggestion, the project accessories were to be provided by the contractor.
4- VALUE ENGINEERING WORKSHOP SCHEDULE
4-1- Pre-study phase: The preliminary value engineering workshop was held with representatives of the responsible units. At that workshop, the value engineering context and process, the project history, the introduction of the base plan, the introduction of the information phase, the determination of the project boundaries and constraints, the determination of the limitations of value engineering and of the project requirements (holy cows), and the determination of the facilities were discussed within the value engineering team. The value standards and the description of the project base plan were also presented and discussed within the team. In addition, at that kick-off meeting the workshop schedule was confirmed.
Value Engineering team: 1- Executives: the planning and development assistant, the overhead-line executive, the substation executive and the building executive. 2- Stakeholders: the operations assistant and the transmission technical office. It goes without saying that the members were chosen indirectly, through correspondence of the Value Engineering committee with the corresponding units and the introduction of representatives by the unit heads and the companies relevant to the workshop.
In that meeting the context and principles of Value Engineering were presented orally, and the members were asked to set aside their regular workplace duties in order to concentrate on the value engineering study.
4-2- Workshop phase: The main study phase is considered the principal phase of the Value Engineering workshop; the whole process of problem recognition, decision making and idea selection is carried out here, as described below.
Review of the information-gathering phase and introduction of the project base plan

The scarcity of information and the use of incomplete or incorrect information are the main causes of a drop in the value index, which is why Value Engineering analyzes the information in order to have more useful and better data; as a result, the quality of the Value Engineering study increases. The next steps of the information-review phase are to define the objective of the project, the objective of the workshop, the existing limitations, the selection standards and the beneficiaries, as follows:
The objectives of the project: connection of the Khorasan 400 KV network with the national 400 KV network; improvement of the voltage quality of the Bafgh network; increased stability and reliability of the network; transmission of the energy provided by the Tabas coal plant to the entire network.
Project requirements (holy cows): construction of a single 400 KV line, with at least one double bundle, on the main route between Tabas and Bafgh.
The limitations of the project: the desert area along some parts of the overhead line; the coal mine near the Tabas power plant; the passage of part of the line near the salt marsh close to the power plant; protected areas.
Stakeholders: the stakeholders of the project were defined as follows: Yazd Regional Power Company; Khorasan Regional Power Company; substation repair and maintenance staff; local farmers and industries; the project consultant; the line construction contractor; the Ministry of National Resources and Environment; residents near the line; Tavanir Company; the Value Engineering group; the core network management company.
Value Engineering goals: reduce cost; improve the plan.
Value Engineering limitations: management of the overhead-line project; construction cost; tower and line design.
After analyzing the project in its entirety, the value engineering group selected the value standards that would be the basis for the evaluation, as follows: the physical reliability of the project; cost reduction; reduction of the start-up period of the project; ease of maintenance and repair; ease of equipment procurement; equipment lifetime; consideration of environmental hazards.
Function analysis (FAST): Function analysis and the drawing of the FAST diagram were defined as the core of the value engineering technique, and doing this completely and precisely has a great influence on exploring creativity and analyzing the cost of new ideas. The team examined the different aspects of the design and, on that basis, defined the function of each part. In this phase the project and the actions to be managed were explored once again, and the functions were finally set out in schedule number 1. There are several ways to prepare the function-analysis schedule (whole-to-detail, detail-to-whole, or random). The classic detail-to-whole form, although time-consuming, ensures that all parts and functions appear on the FAST diagram; thus, in this workshop the group used this route and presented the function schedule, which included the definition of the main, secondary and support functions. The FAST diagram should be drawn after the detailed function schedule has been prepared. There are many techniques for the FAST diagram, as shown in diagram number 1. The value engineering team extended the FAST diagram to define the parent, child and support functions in order to identify the highest-cost functions in the diagram (diagram 1, attached).
Accordingly, the focal areas of the creativity (idea-generation) phase of the workshop were defined as follows: providing the towers and their connections; providing the accessories; providing the phase conductor; providing the protection wire; and providing the OPGW wire.

Creativity: The creativity phase is the most enjoyable stage of a value engineering workshop. In this phase, based on the FAST diagram derived from the previous phase, the potential points of the project were assigned. The group as a whole focused on the areas with the greatest potential for increasing value, thinking up and creating new ideas. Emphasis was placed in this phase on not evaluating or criticizing the feasibility, effectiveness, advantages or disadvantages of any idea; the group was to propose new ideas freely, avoiding any inhibition or self-censorship. The brainstorming technique was applied in this phase. At the beginning all the members of the team were prepared: explanations were given about the parts of the project, the functions and the priorities. The workshop members were then asked to brainstorm ideas for each function, taking the priorities into account, to develop the ideas and to write them down in their documents. This phase lasted 240 minutes and in the end some 152 ideas were produced. After the creativity phase, each idea was preliminarily evaluated: each idea was briefly explained by its proponent and reviewed with respect to its creativity, its cost factors, its profitability for the project, its advantages and disadvantages against the value standards, and the customer's agreement. After each idea was explained and discussed, the group was asked to express its opinion and reach an agreement. Finally, the ideas that passed this filter and were selected for the evaluation phase were recorded in an Excel sheet. In this way the group's opinion on all 152 ideas was reviewed.
Evaluation phase: In this phase, the ideas selected by the team were divided among the members for development. The group categorized and summarized the outcome of the workshop according to the development meetings, and the results were announced in progress reports.
A) Options for the Tabas-Bafgh overhead line on which, in terms of savings and increases in the value standards, the group fully agreed: 1) crossing near the power station; 2) reducing the length of the line by crossing the salty lands; 3) using hanging glass insulators (fog type); 4) using 7NO8 ground wire.
B) Options that yielded savings but lowered the value standards in some situations; the group did not fully agree on these, so a short report was sent to the owner for comment: 1) using the Squap conductor; 2) using the halter towers.
C) Options that had some advantages but were rejected because they reduced the value standards, even though they may have value in other projects: 1) using different types of halter towers; 2) using different towers according to the different angles; 3) reducing the foundation volume.
For the final analysis, the members reviewed the selected standards using the uniform matrix, and the priority of each selected standard was set. Finally, the final ideas and the analyses of the standards scored by the members, together with the complementary questionnaires, were delivered to the group members, and the performance of each idea against each standard was finalized. For each idea, the score for each standard was multiplied by the standard's weight to arrive at a final value. Schedule number 2 shows the score of each option according to the value standards.
Development: During a two-week break in the workshop, the groups worked through their own ideas and went over the pros and cons and their costs.
The different groups then provided their progress reports, including the pros and cons and the savings computed according to the Value Engineering system, and each idea was discussed in detail.

The extension phase: Schedule number 3 shows the calculated costs of each option.
5- RESULTS
According to the results of the workshop, the selected options could be implemented in the project without difficulty. Since the preferred options could be implemented at the same time, the group presented its plan, which was to pass close to the power plant, reduce the length of the line by passing through the salty lands, use fog-type hanging glass insulators while reducing the use of accessories, and use the 7NO8 system. The plan to change the conductor from CERLO to Squap and to use the halter towers was presented to the client, because the group as a whole had failed to reach an agreement on it.
References:
[1] S. S. Iyer, Value Engineering Use Method, Jebel Ameli, Farat Publishing, 2004.
[2] Tavakoli Reza, Shekari Amir, Value Engineering as a Powerful Tool, Tadbir Magazine, Issue 132.
[3] Teri, Michel, Value Management, Mahab Ghods Publishing, 2005.
[4] Arthur E. Mudge, Value Engineering, McGraw-Hill, 1981.
[5] Arthur E. Mudge, Value Engineering: A Systematic Approach, McGraw-Hill, 1971.

Annex No. 1 — Function analysis: a table listing the components of the line (foundations, tower, conductor cable, protection cable, insulator, accessories, grounding system) against their main and support functions: creating the foundations and access to the line; transferring mechanical loads to the foundations and to the ground; transferring electrical charge; tolerating mechanical loads, electrical charges and short-circuit current; transferring lightning energy and short-circuit current to earth; creating electrical insulation; maintaining the allowed clearances; maintaining the wiring and the other components; transferring information; alarm; and creating privacy along the route.

Schedule No. 2 — Scoring of the options according to the value standards: the base plan; use of different types of halter towers with the base plan (1); use of towers with special angles according to the existing angles; passing near the power plant; reducing the length of the line (crossing the salty lands as far as possible); the Squap conductor with three bundles; the fog-type hanging glass insulator with a reduction in accessories; reduction of the foundation volume in hard ground; and use of the 7NO8 cable — each scored against reliability, cost reduction, reduction of the project start-up period, ease of maintenance, profitability, ease of operation, equipment lifetime and environmental rules.
Schedule No. 3 — Cost model of the final ideas: the same options compared on the option score according to the standards (reported values between roughly 395 and 424), the cost of each option (lifetime expense and primary investment) and the value indicator of each option, together with the lifetime savings and investment reductions of the combined plan (passing close to the power plant, reducing the length of the line through the salty lands, using fog-type hanging glass insulators with fewer accessories, and using the 7NO8 system).

Diagram No. 1 — FAST diagram of the Tabas-Bafgh 400 KV line (design intent: 400 KV, single line). Reading from the basic function (B.F) through the higher-order functions (H.O.F), the diagram arranges the line's functions from basic to higher order: routing, creating privacy and accessing the line (visiting the location, reviewing the aerial maps); creating the foundations and installing the towers (transferring charge and energy to earth); installing the conductor cable (transferring energy, tolerating electrical and mechanical loads); installing the protection system and the OPGW (transferring data information, alarm, conducting lightning energy to earth); installing the insulators and accessories (creating electrical insulation, maintaining the allowed clearances); and installing, testing and commissioning the equipment, with line, cable and annual maintenance as when/how functions (O.T.F, A.T.F) and respect for standards and ease of maintenance as why functions (L.O.F).

Use of data mining to optimize commercial and operational interests: a case study of Duke Energy Brasil

Clayton Baltazar, Duke Energy, Av. Presidente Wilson, 118 Centro, Rio de Janeiro, Brazil; Tathiane Ribeiro, Duke Energy, Av. Presidente Wilson, 118 Centro, Rio de Janeiro, Brazil; André Machado Caldeira, DSc, IBMEC/RJ, Av. Presidente Wilson, 118 Centro, Rio de Janeiro, Brazil; Maria Augusta Soares Machado, DSc, IBMEC/RJ, Av. Presidente Wilson, 118 Centro, mmachado@ibmecrj.br, Rio de Janeiro, Brazil; Mihail Lermontov, PhD, UFF, Av. Presidente Wilson, 118 Centro, Rio de Janeiro, Brazil; Rômulo Martins França, MSc, Av. Presidente Wilson, 118 Centro, Rio de Janeiro, Brazil; Thiago Drummond, MSc, IBMEC/RJ, Av. Presidente Wilson, 118 Centro, Rio de Janeiro, Brazil; and Walter Gassenferth, MSc, IBMEC/RJ, Av. Presidente Wilson, 118 Centro, Rio de Janeiro, Brazil

ABSTRACT
This article presents partial results of a Data Mining Research Project for Duke Energy Brasil that is being developed by Ibmec-RJ. The data collected support a series of tests and studies used in the day-to-day business of the company. A prototype of the data mining software being implemented to optimize commercial and operational interests is presented. Through the use of search algorithms, this work tries to discover patterns, trends and inference rules in the data in order to improve the decisions made by the user. Future articles will present the modules of this tool that use neural networks and cluster analysis with the Duke Energy Brasil database.
Keywords: data mining, statistics, association rules, prototype, optimizing interests

1. INTRODUCTION
Duke Energy Brasil today has a large data bank that covers both its operations and part of its trading. This information supports a series of tests and studies used in the day-to-day business of the company. As these data were used and manipulated, however, Duke realized that some variables are difficult to predict, since there are inherent but unknown relationships between them and other variables. The idea of this research is to use data mining techniques to discover these relationships, and thereby not only improve Duke's forecasting and actual operation, but also contribute to the development of the trading and operating rules of the Brazilian electricity sector. The final product of this research project is a computational tool for extracting knowledge from Duke's operational and business databases; based on this information, the tool will assist in the company's decision-making processes.

2. METHODOLOGY
Through the use of search algorithms, the aim is to discover patterns, trends and inference rules in the data. With these rules or functions, the user can make better-informed decisions. For this investigation, the load, generation, flow and precipitation files were prepared and validated. The validated data were loaded into a Microsoft Access database, and several tables were created to be analyzed using association rules. The association-rule extraction process is semi-automatic, because it requires the participation of the user in defining the data to be analyzed and in verifying the discovered knowledge, indicating whether or not it is useful. The process aims to extract from large databases, without any prior formulation of hypotheses, valid and useful information for decision making. Support and Confidence are the parameters used in this methodology; they must be set by the user to limit the number and importance of the extracted rules. The patterns described by association rules are of the type: if X then Y (X ==> Y).
Support: support(X ==> Y) = P(X ∩ Y), the proportion of cases in which X and Y occur together. The support selects, from the database, all possible rules of the type X ==> Y.
Confidence: confidence(X ==> Y) = P(Y | X) = P(X ∩ Y) / P(X), the proportion of cases in which both X and Y occurred, divided by the proportion of cases in which X occurred. Confidence selects, among all the possible rules of the type X ==> Y, those for which Y occurred when X occurred in the database. Rules may be considered interesting when, for example, support > 10% and confidence > 50%.
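As a minimal sketch of these definitions (the records and thresholds below are illustrative assumptions, not Duke Energy data), support and confidence can be computed directly from a list of transactions:

```python
# Sketch of association-rule support and confidence.
# The records and thresholds are illustrative assumptions, not Duke Energy data.

records = [
    {"high_precipitation", "high_hydro_generation"},
    {"high_precipitation", "high_hydro_generation"},
    {"low_precipitation", "high_thermal_generation"},
    {"high_precipitation", "low_thermal_generation"},
]

def support(x, y, db):
    """P(X and Y): fraction of records containing both itemsets."""
    return sum(1 for r in db if x <= r and y <= r) / len(db)

def confidence(x, y, db):
    """P(Y | X) = P(X and Y) / P(X)."""
    p_x = sum(1 for r in db if x <= r) / len(db)
    return support(x, y, db) / p_x if p_x else 0.0

x, y = {"high_precipitation"}, {"high_hydro_generation"}
s, c = support(x, y, records), confidence(x, y, records)
# Keep the rule only if it clears the user-defined thresholds.
if s > 0.10 and c > 0.50:
    print(f"rule X ==> Y kept: support={s:.2f}, confidence={c:.2f}")
```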

3. THE PROTOTYPE
On the initial screen the user enters a username and password. After logging in, the file tab gives the user control of his access, with options to upload files, view the association rules and open the reports, or to exit the system. When uploading, the user selects the data to be analyzed.

Once the data are selected, the user inserts the parameters for generating the association rules in the parameters tab and then generates the rules; the result screen shows the rules obtained. In the reports tab, the user can generate graphs by selecting "average hydroelectric energy generated and planned per month and generation unit", "average hydroelectric and thermal energy generated per month and per generation area" or "accumulated precipitation per month". When the user clicks create, different types of bar charts are produced.

Charts can also be drawn in lines, in three-dimensional lines or in three-dimensional bars, and can be filtered by generation area or by generating unit.

4. CONCLUSIONS
This work presented the prototype of the data mining software being implemented in this project. Future work will present the modules that use neural networks and cluster analysis with the Duke Energy Brasil database, with the aim of improving the quality of service to the Brazilian consumer.

5. REFERENCES
[1] Barbosa, Denise Chaves Carvalho, Gassenferth, Walter, Machado, Maria Augusta Soares, "Data Mining as a Decision Tool for Materials Procurement", in: Satyendra Singh (org.), Handbook of Business Practice and Growth in Emerging Markets, Winnipeg, Canada: World Scientific Pub Co Inc, 2009.
[2] Cios K., Pedrycz W., Swiniarski R., Kurgan L., Data Mining: A Knowledge Discovery Approach, Springer, 2007.
[3] Krzyztof Koperski, Jiawei Han, "Rule Discovery of Spatial Association in the Geographic Information Database", Proceedings of the 4th International Symposium on Advances in Spatial Databases (SSD), Vol. 951, Springer-Verlag.
[4] Nabil Adam, Vijay Atluri, Songmei Yu and Yelena Yesha, "Efficient Storage and Management of Environmental Information", 11th IEEE-NASA Mass Storage Conference, Maryland.
[5] Nogueira, Edgard, Data Mining using SODAS: A Case Study, Dissertation, Mestrado em Administração, Ibmec-RJ, 2006.
[6] Machado, Maria Augusta Soares, Computational Intelligence, São Paulo: Thomson Publishing House, 2007.
[7] Medeiros, Valéria Zuma, Machado, Maria Augusta Soares, Caldeira, André Machado, Pacheco, Giovanna Lamastra, Gassenferth, Walter, Quantitative Methods with Excel, São Paulo: Cengage Learning Publishing House, 2008.
[8] Raymond T. Ng, "Detecting outliers from large datasets", Geographic Data Mining and Knowledge Discovery, Taylor & Francis.

Affordability and level of assistance for the economically weaker section group: a case study from the city of Surat (India)

Dr. Krupesh A. Chauhan, Associate Professor, CED, S.V. National Institute of Technology, Surat, Gujarat, India; Dr. N. C. Shah, Professor, CED, SVNIT, Surat, Gujarat, India; Dr. S. M. Yadav, Associate Professor, CED, SVNIT, Surat, Gujarat, India

ABSTRACT
The housing shortage in India is growing rapidly, mainly because the supply is much less than the demand for housing. In urban areas the problem is more complex, since the pressure on housing and services comes both from natural increase and from migration. The most important resource needed to buy a house is finance. Housing plays an important role in a country's economy, typically accounting for 10 to 20 percent of total economic activity. This paper presents research on affordable housing in India; it is a classic example of affordable housing for the economically weaker section in a metropolitan city such as Surat.
Keywords: housing unit, housing, finance, affordability, economically weaker section.

1. INTRODUCTION
Housing is usually a person's greatest asset. In the housing sector the problem is further exacerbated by the limited ability of families, both poor and rich, to spend enough on basic services, which also requires financing. The availability of housing finance is therefore crucial for overall economic development, as well as for a household's well-being and quality of life [1]. Any attempt to resolve the housing shortage has to address the issue of urban households in the EWS (economically weaker section) and LIG (low-income group) categories. Up to 70% of the urban population belongs to these two income groups (according to the NCEAR study), and of the total housing need of 26.53 million units in urban areas for 2012, up to 97% will be required by EWS and LIG people [2]. In the first period of the National Five-Year Plan, housing was introduced into the policy framework at the national level. Affordability was emphasized as the key issue, and government support through subsidies and loans was deemed necessary; this support still continues in the 11th plan period.

2. STUDY AREA
Surat is one of the fastest-growing cities on the corridor between Mumbai and Ahmedabad. The city is situated on the bank of the Tapi River, with the coast of the Arabian Sea to the west. Surat is the main center of business and commerce in the South. The Surat Municipal Corporation (SMC) area, shown in Fig. 2.1, is about 334 square kilometers. On the housing front, despite the substantial increase in housing stock in urban areas, the housing situation remains grim, made all the more complex by the diverse nature of the population and the geographic space. The mind-boggling forecast is that almost 36% of India's population will move to urban centers [3]. This forecast comes into perspective when we realize that, even at present, urban India is short millions of housing units, as noted in the draft 11th plan task-force report on urban housing [8], which estimates the housing requirement during the 11th plan period in millions of units, including the existing shortage; more than 97% of the total shortage is in the EWS and LIG categories.
Fig. 2.1: Surat Municipal Corporation Area

3. OBJECTIVES OF THE STUDY
To study affordability. To investigate the level of assistance for the EWS in the Surat Municipal Corporation area.

3.1 Structure of the housing finance institutions
Housing is a State subject. Most states have established state-level housing boards and slum clearance/development boards. Prior to the first five-year plan [4], housing was mainly taken care of by the private sector, with some budgetary allocations for housing for government employees. Figure 3.1 is a schematic representation of the formal housing finance system [9].
Fig. 3.1 — A network of the formal housing finance system: funds flow from the Central Government, the State Governments and private savings, through the National Housing Bank, HUDCO, LIC, GIC, commercial banks and HDFC, to the State Housing Boards, local bodies, public companies, housing cooperatives, and individual and cooperative home buyers.
New initiatives have been taken to address the problem of housing finance; the creation of the various Housing Finance Corporations is one such step. The financial institutions are mandated to allocate a substantial part of their resources to meeting the needs of the poor. Affordable housing has three main aspects: a) financial (government and public sector, public-private partnership, financing through a compound credit mechanism, self-help groups and international capital raising); b) technical (cost of materials, cost of construction technology and cost of development of land and infrastructure); c) political (reorientation of central, state and local bodies, strengthening of the construction center, new approach to construction management) [12].

5. THE AFFORDABILITY STUDY
To understand affordability, it was analyzed using Wakely charts to indicate levels of affordability in the EWS.
5.1 Input parameters
As the selection criterion for the housing areas, those in which residents had been living for at least eight to ten years were chosen, so that the residents would be familiar with them. The analysis of the questionnaire showed that most of the interviewed households had moved to their housing area in the 1990s. Taking that into account, the corresponding norms established by the 11th plan and by HDFC in 2006 (Table 5.1) were adopted for the affordability study [8].
In the nomogram shown in Fig. 6.1, housing standards in terms of area are plotted on the Y-axis of the corresponding graph, while the related capital cost is plotted on the X-axis. For the EWS category, the capital cost as a multiple of annual household income is as shown in Fig. 6.1. This means that the maximum capital cost of a house in this category could be only Rs. 55,836/- (39,600 × 1.41). At Rs. 4,841 per square meter of construction cost, this income category could afford only a limited maximum floor area, represented on the corresponding scale by point P. Point P is then projected on the capital-cost graph, as a multiple of annual household income, to meet the line corresponding to 1.41 at point Q. Point Q is then projected horizontally onto the curve representing the percentage distribution of household income to meet point R, and R is projected onto the graph below, which shows the percentage of households in the income category that can afford the house [6].
Table 5.1: Interest rates and loan repayment duration for different income categories. Category: EWS; max. annual income: Rs. 39,600; interest rate: 5.68%; loan term: 15 years; rental (housing) share of income: 20%. Source: 11th plan, HDFC.
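The arithmetic implied by the figures quoted above can be written out as follows; this is a worked restatement, and the affordable-area value is derived here for illustration rather than quoted from the paper:

```latex
% Maximum affordable capital cost for the EWS category,
% from the stated income and capital-cost multiple:
C_{\max} = Y \times m = 39{,}600 \times 1.41 \approx \text{Rs.}\ 55{,}836
% At the stated construction cost of Rs. 4,841 per m^2,
% the implied maximum affordable built-up area is
A_{\max} = \frac{C_{\max}}{c} = \frac{55{,}836}{4{,}841} \approx 11.5\ \text{m}^2
```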
5.2 Capital cost, households and house size
The minimum construction cost according to 2007 prices was taken as follows: EWS housing: Rs. 4,841 per square meter. Household size was taken as 4.4 (NSS report No. 505, Jan-June 2004) [7]. In the next section, the effectiveness of the level of assistance for the EWS is analyzed using Wakely charts.

6. LEVEL OF SUPPORT FOR THE EWS
Wakely charts are mainly of two types. In the first type, the capital cost of the home can be determined as a multiple of the annual household income, given three parameters.

These are: (i) the interest rate on the loan, (ii) the length of the loan, and (iii) the share of the household's income devoted to housing. Wakely's second type of chart is a combination of four graphs expressed in the form of a nomogram. By using these charts in tandem, the percentage of households that can afford a home of a given standard can be determined. The parameters are: (i) the standard of housing in terms of area, (ii) the capital cost involved, (iii) the capital cost as a multiple of the annual family income, and (iv) the percentage distribution of the annual family income. For the EWS affordability analysis, the home's capital cost was first determined as a multiple of annual household income. The term of the loan is assumed to be 15 years at an effective interest rate of 5.68%, based on the prevailing benefit, and the share of family income devoted to housing is assumed to be 20%. As shown in the figure, the point of intersection of the 5.68% interest rate and the loan duration gives point O. This point is projected horizontally to the left to the corresponding 20% of family income devoted to rent, giving point P, and the line is projected further left to meet the scale of annual repayments R. Point P is then dropped perpendicularly to obtain point Q on the scale of capital cost as a multiple of annual family income, which gives the value of c. Using the equation p = c × r, the value of p can be obtained. This means that the maximum cost of a house that a person in this income category can afford is that multiple of the annual family income [5].
Fig. 6.1: Ability to afford housing. Fig. 6.2: Housing financing for the EWS.
The affordability bar shows that around 80% of the EWS category could afford the bare-minimum floor area, as shown in the nomogram (Fig. 6.1), which is better than what is prescribed by the National Building Code. A maximum affordable size of 22.45 square meters for the EWS was considered by the SMC authority [10], with a household size of 4.4 persons. This implies a high room-occupancy rate, which would affect the level of satisfaction.

7. EWS HOUSING PROJECTS
The Surat Municipal Corporation has built 7,616 housing units at 24 different sites [11]. Construction work on 7,424 housing units (DUs) has been completed and possession has been handed over to the beneficiaries; units are allotted to beneficiaries by lottery. The housing design for the EWS category is a ground-plus-three-story RCC framed structure, with four housing units on each floor. The total constructed area of a single DU is … m2; the unit has an individual living room, kitchen, toilet, laundry room and balcony. The cost of the land is not considered part of the total cost of the project; 1,49,596 m2 of land has been covered for 23 sites. The average housing density is 453 DUs/hectare (i.e., 2,265 persons per hectare). The Surat Municipal Corporation also provides basic infrastructure such as water supply, drainage, pucca roads and public lighting. The construction cost of an individual housing unit was Rs. 58,000/- for the projects before the 26 January earthquake, while for the post-earthquake projects, with the revised building design and construction, the cost of a single dwelling increased to Rs. 68,000/-. The Government provides a grant of Rs. 5,000 per DU. A photograph of the project site is shown in Fig. 7.1.

Fig. 7.1: Photograph of the project

8. LEVEL OF ASSISTANCE ACCORDING TO THE SMC COST CEILING
The salient points, such as the income ceiling, built area, unit cost, government subsidy and other details, are shown in Table 8.1.
Table 8.1 — Level of assistance under the EWS scheme (allotment by lottery or replacement of huts): maximum income limit Rs. 2,500 per month; unit cost Rs. 58,000 before the earthquake and Rs. 68,000 after it; beneficiary contributions of Rs. 18,000 (pre-earthquake) or Rs. 28,000 (post-earthquake) for lottery allotments and Rs. 1,000 for hut replacement; a government subsidy of Rs. 5,000 per unit; SMC interest-free loan components of Rs. 17,000, Rs. 27,000 or Rs. 35,000; and monthly installments between Rs. 94 and Rs. 332 over terms of up to 15 years.
The affordability bar shows that 14% of the EWS, as shown in Fig. 8.1, could not afford a housing unit of the SMC-supported size. This shows that SMC's levels of assistance were adequate and that minimal housing was affordable for most people in the EWS. The actual unit cost after the earthquake is Rs. 63,000/- after the benefit of the government subsidy, and the total monthly installment is Rs. 482/-, as shown in Table 8.1, which can readily be met out of monthly income.
Fig. 8.1: Housing financing at SMC maximum costs
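As a quick check of these figures (a minimal sketch; the 20% housing share is taken from Table 5.1 above), the quoted installment can be compared against the EWS income ceiling:

```python
# Quick affordability check for the post-earthquake EWS unit,
# using the figures quoted in Tables 5.1 and 8.1 above.

income_ceiling = 2500   # Rs. per month (EWS income ceiling, Table 8.1)
installment = 482       # Rs. per month (total installment, Table 8.1)
housing_share = 0.20    # share of income assumed for housing (Table 5.1)

ratio = installment / income_ceiling
print(f"installment is {ratio:.1%} of the ceiling income")  # about 19.3%
print("affordable" if ratio <= housing_share else "not affordable")
```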

9. CONCLUSION AND DISCUSSION
Low affordability among the majority often results in a leakage process in which housing does not reach the actual target groups and is instead occupied by people of a higher income level. It should be highlighted that with better affordability the level of satisfaction also rises, something that has so far been overlooked. The main findings of the study are the following. A minimum-standard house is affordable for most of the EWS in the urban area of Surat. Given the prevailing construction rates and the full provision of infrastructure facilities in the EWS projects, the SMC authority has provided the best level of assistance. A larger number of units needs to be built, based on the demand-supply ratio observed in Surat during the invitation of application forms for the EWS housing scheme by the SMC authority. Based on the study, the EWS appears to be in a better position than the LIG; people in the highest tier of the EWS are more favorably placed than their immediate higher income categories, as they enjoy loans at lower interest rates and benefit from longer repayment periods. Across all levels of EWS assistance, affordability is significant for the satisfaction of needs under the current set of conditions. There is a need for broad and widespread institutional networking with an emphasis on the housing finance system. Institutional development in the housing finance sector has now taken on critical importance, not only in the context of affordability but also for better integration of the housing finance system with the macro-finance system. The emerging policy intervention of the Indian government, and the changing role of the government from provider to enabler, has created a wealth of opportunities and challenges for the key stakeholders.

REFERENCES
[1] Bertrand Renaud, Housing and Financial Institutions in Developing Countries: An Overview, World Bank Staff Working Papers, Number 658.
[2] Bhattacharya K. P. (1998), Affordable Housing and Infrastructure in India.
[3] Census Data (2001), Census of India, New Delhi, Ministry of Home Affairs.
[4] D. S. Sudhakar (2007), Affordable Housing and Profitable Technologies, Shelter, Vol. 10, HUDCO-HSMI Publication.
[5] Krupesh A. Chauhan, Dr. N. C. Shah, Maulik P. Jariwala (2008), A Study: An Overview of Housing Finance Mechanism and Housing Affordability for LMIG and LIG: Surat Urban Area, First International Conference on Emerging Technologies and Applications in Engineering, Technology and Sciences, Volume 2.
[6] Krupesh A. Chauhan, Dr. N. C. Shah, S. M. Yadav (2007), A case study on area-based housing affordability vs. income for three nodes of the urban area of Surat (NSSCM-07), Soft Computing Methodology, Ujjain Eng. College, Ujjain (M.P.), Volume 1, 32.
[7] National Housing and Habitat Policy (1998), Ministry of Urban Affairs and Employment, Government of India, Nirman Bhavan, New Delhi.
[8] Report on the 11th Five Year Plan (2007), Task Force on Urban Housing with an emphasis on slums, Ministry of Housing and Urban Poverty Alleviation, Government of India, New Delhi.
[9] R. N. Sharma (1994), Indo-Swedish Perspectives on Affordable Housing, Tata Institute of Social Sciences, Mumbai.
[10] Surat City Development Plan, Surat Municipal Corporation (SMC) and Surat Urban Development Authority (SUDA), Surat.
[11] Surat Vision 2020, Surat Municipal Corporation (SMC), 1 May.

A generalized definition of the Jacobian matrix for mechatronic systems

Hermes GIBERTI, Simone CINQUEMANI
Department of Mechanical Engineering, Politecnico di Milano, Campus Bovisa Sud, via La Masa 34, 20156, Milan, Italy
Giovanni LEGNANI
Dipartimento di Ingegneria Meccanica ed Industriale, Università degli Studi di Brescia, via Branze 38, 25129, Brescia, Italy

ABSTRACT
The kinetostatic performances of a manipulator are generally investigated considering only the geometric structure of the robot, neglecting the effect of the drive system. In some circumstances this approach can lead to errors. This can happen if the actuators are not identical to each other, or when the gear ratios used are not identical and/or not constant. The article presents the so-called Generalized Jacobian Matrix, obtained by identifying an appropriate matrix, generally diagonal, defined to: 1. adequately weight the different speed and force contributions of each actuator; 2. describe the possible inhomogeneous behavior of the drive system, which depends on the configuration reached by the robot. The theoretical analysis is supported by examples that highlight some of the most common mistakes made in evaluating the kinetostatic properties of a manipulator, and how they can be avoided using the Generalized Jacobian Matrix.

1. INTRODUCTION
The behavior of a serial or parallel manipulator can be investigated through its kinetostatic performances [1], such as repeatability, stiffness, maximum force or speed. They all depend on the kinematic structure of the system, its configuration in the workspace, and the type of drive system used to operate the robot. The manipulator may have singular configurations in which performance in some directions is extremely poor while in others it is extremely good. Conversely, the manipulator may have configurations in which the performances are identical in all directions. This behavior can be described through the concept of isotropy [2], [3]. Naturally, the design of an isotropic machine is desirable because it ensures homogeneous performance in all directions in terms of precision, repeatability, stiffness, maximum force, and speed [4]. The kinetostatic properties of a manipulator, depending on its position in the workspace, can be analyzed using the Jacobian matrix (J) [5] or by manipulability ellipsoids, strictly related to the Jacobian matrix itself [1]. However, the isotropy evaluation is generally carried out under the assumption that the behavior of all actuators is independent of the robot position and is the same for all actuators. This assumption corresponds to excluding the effects of the drive system from the isotropy, thus assuming that it depends solely on the geometry of the robot and the position it has reached [6]. This practice inevitably leads to an incorrect formulation of the problem and an inaccurate evaluation of the isotropy of the system [2]. The article analyzes this problem in depth by introducing the definition of the Generalized Jacobian Matrix (J*): unlike the Jacobian matrix, it makes it possible to evaluate the real isotropy of a manipulator, taking into account the effects of the drive system on the performance of the robot.

2. ISOTROPY OF THE MANIPULATOR
Robot performances are usually measured with reference to the Jacobian matrix J. The function

    f(x, q) = 0    (1)

expresses the relationship between the coordinates of the joint space q and those of the work space x.
Differentiating equation (1) we obtain:

    J_x ẋ = J_q q̇    (2)

where:

    J_x = ∂f/∂x ;  J_q = ∂f/∂q    (3)

The Jacobian matrix J can be expressed as:

    J = J_q⁻¹ J_x    (4)

linking the velocities of the joint space q̇ with those of the work space ẋ as:

    q̇ = J ẋ    (5)

Thanks to the so-called kinetostatic duality [ ], the transposed Jacobian matrix represents the relationship between the forces and torques acting on the end effector, F_a, and the forces and torques exerted by the actuators, F_q:

    F_a = Jᵀ F_q    (6)

The kinetostatic properties of a manipulator, as a function of its position in the work space, can be analyzed through some indices related to the Jacobian matrix [2,5]. Isotropy is an interesting property of a manipulator, since it defines the behavior of the robot in each direction. Recall that the i-th singular value σ_i(A) of a matrix A is defined as the square root of the eigenvalue λ_i of the corresponding matrix AᵀA:
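A minimal numerical sketch of Eqs. (2)-(5) is given below, assuming the constraint form f_i(x, q) = ‖x − A_i(θ_i)‖² − L2² for the 5R parallel machine discussed later, with elbow points A_i = O_i + L1 [cos θ_i, sin θ_i]. The link lengths, base points, and pose are illustrative assumptions, not values from the paper; the sign of J_q is absorbed so that J_x ẋ = J_q q̇ holds as in Eq. (2).

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative 5R PKM geometry (assumed values)
L1, L2 = 0.4, 0.6
O = np.array([[-0.2, 0.0], [0.2, 0.0]])  # grounded joints O1, O2

def f(x, q):
    """Constraints f(x, q) = 0: the end effector x lies at distance L2
    from each elbow A_i = O_i + L1 [cos(q_i), sin(q_i)]."""
    A = O + L1 * np.stack([np.cos(q), np.sin(q)], axis=1)
    return np.sum((x - A) ** 2, axis=1) - L2**2

def jacobians(x, q, h=1e-6):
    """J_x = df/dx and J_q = -df/dq by central finite differences,
    so that J_x xdot = J_q qdot as in Eq. (2)."""
    Jx = np.zeros((2, 2)); Jq = np.zeros((2, 2))
    for k in range(2):
        dx = np.zeros(2); dx[k] = h
        Jx[:, k] = (f(x + dx, q) - f(x - dx, q)) / (2 * h)
        dq = np.zeros(2); dq[k] = h
        Jq[:, k] = -(f(x, q + dq) - f(x, q - dq)) / (2 * h)
    return Jx, Jq

# Pick driven-joint angles and solve the constraints for the pose x
q = np.array([np.deg2rad(100.0), np.deg2rad(80.0)])
x = fsolve(lambda x: f(x, q), x0=np.array([0.0, 0.8]))

Jx, Jq = jacobians(x, q)
J = np.linalg.solve(Jq, Jx)            # Eq. (4): J = Jq^-1 Jx
print("J =\n", J)
print("cond(J) =", np.linalg.cond(J))  # isotropy index of Eq. (8)
```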

    σ_i(A) = √( λ_i(AᵀA) )    (7)

where λ_i ≥ 0. The isotropy can be measured through the index:

    I = σ_max / σ_min = cond(J)    (8)

which is the condition number of the Jacobian matrix. When cond(J) = 1, the minimum and maximum singular values coincide and the manipulator is defined as isotropic. The isotropy condition can also be expressed as [5]:

    JᵀJ = kI    (9)

where k is a scalar and I is the identity matrix. That means isotropy can be achieved when the Jacobian matrix is proportional to an orthogonal matrix.

This definition, however, is made under the assumption that the behavior of all actuators is independent of the robot's pose and is the same for all actuators. This assumption corresponds to excluding the effects of the drive system from the isotropy, thus assuming that it depends only on the geometry of the robot and the position it has reached. Moreover, this classical definition does not consider that some of the end-effector and joint coordinates describe rotations while others describe translations, and therefore use different units (for example, degrees and meters). This practice inevitably leads to an incorrect formulation of the problem and an inaccurate discussion of system properties such as isotropy. To overcome these problems it is possible to introduce characteristic lengths used to normalize the dimensions of the manipulator: one length is used to correlate the rotation of the TCP with its translation, while a second length is used to compare revolute and prismatic actuators [7]. However, the choice of the value of these parameters is arbitrary, and criteria must be developed to select reasonable values. The article analyzes in depth the problem of comparing different actuators by introducing the definition of the Generalized Jacobian Matrix: unlike the Jacobian matrix, it makes it possible to evaluate the real isotropy of a manipulator, taking into account the effects of the drive system on the robot's performances.

To better understand these concepts, a case study is presented: a 5R parallel kinematics machine with 2 degrees of freedom, consisting of 4 links (5 considering the ground) connected by five revolute joints (R), two of which are located on the ground and driven by motors (Figure 1). It is made up of 4 main elements: 1. the support (light grey), which is fixed and connected to the ground; 2. the drive system (green), made up of 2 brushless motors, each of which actuates a joint; 3. the transmission (dark grey), which converts the torque and speed supplied by the motor to those required at the joints; 4. the manipulator (light blue), made up of 4 links connected by five revolute joints. The position x = [x_e; y_e]ᵀ of joint C can be expressed as a function of the coordinates of the driven joints q = [θ_1; θ_2]ᵀ. Figure 2 shows a manipulator prototype that was developed, the main feature of which is the possibility of changing the distance between the joints connected to the ground (O_1, O_2) via two sliders. The configuration considered here makes the two joints coincident. This configuration gives the largest workspace, a circle centered at the origin (O_1 ≡ O_2) with radius R = L_1 + L_2.

Figure 1. 5R 2-dof PKM.
Figure 2. 5R 2-dof PKM with coincident joints connected to the ground (O_1 ≡ O_2).

The graph in Fig. 3 shows the trend of the inverse of the condition number of the Jacobian matrix for the described configuration, within half of the work space.
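A quick numerical check of Eqs. (7)-(9), assuming two invented 2x2 Jacobians: a scaled rotation, which satisfies JᵀJ = kI exactly, and a sheared matrix, which does not.

```python
import numpy as np

def isotropy_index(J):
    """I = sigma_max / sigma_min = cond(J), Eq. (8)."""
    s = np.linalg.svd(J, compute_uv=False)  # singular values, Eq. (7)
    return s.max() / s.min()

c, s = np.cos(0.3), np.sin(0.3)
J_iso = 2.0 * np.array([[c, -s], [s, c]])     # proportional to orthogonal
J_aniso = np.array([[1.0, 0.8], [0.0, 1.0]])  # sheared

for name, J in [("isotropic", J_iso), ("anisotropic", J_aniso)]:
    print(name, "cond(J) =", round(isotropy_index(J), 3))
    print("J^T J =\n", J.T @ J)  # Eq. (9): k*I only in the isotropic case
```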
It is observed that the locus of the points where the manipulator is in an isotropic configuration is a circle. The isotropic behavior depends only on the distance of the end effector from the origin, not on the direction. Figure 4 refers to a robot configuration in which the grounded joints are not coincident (O_1 ≠ O_2). The workspace is reduced and the manipulator behavior is no longer radially symmetric.

Figure 3. Evaluation of the isotropy of the robot within the workspace by means of the inverse of the condition number of the Jacobian matrix (case O_1 ≡ O_2).
Figure 4. Evaluation of the isotropy of the robot within the workspace by means of the inverse of the condition number of the Jacobian matrix (case O_1 ≠ O_2).

5. THE GENERALIZED JACOBIAN MATRIX
Usually, in kinematic optimization, the effect introduced by the behavior of the drive system is not considered. One of the most frequent cases is when the actuators are not identical (i.e. different maximum speeds, different maximum torques, etc.). In this case, instead of analyzing the matrix JᵀJ, or its inverse, it is necessary to consider the generalized Jacobian matrix:

    J* = JD    (10)

where D is a matrix (generally diagonal) to be defined in order to adequately weight the different contributions of Q̇ or F_q [6]. It is therefore fundamental, for the design of the kinematics of a robot, to evaluate the performance indices presented above on the matrix J* instead of J.

The effect described by the matrix D can be better explained by recalling some definitions about isotropy [6]:
1. Geometric isotropy is reached when the manipulator, regardless of its drive system, is in an isotropic configuration. In this case:

    cond(J) = 1  or  JᵀJ = kI    (11)

2. Isotropy of the actuation system is achieved if the behavior of the actuation system is the same in all the configurations reached by the manipulator. In this case:

    cond(D) = 1  or  DᵀD = kI    (12)

3. Effective isotropy is reached when the robot, driven by a given drive system, behaves isotropically. In this condition, regardless of the condition numbers of J and D:

    cond(JD) = 1  or  DᵀJᵀJD = kI    (13)

Actuators with different performance
When the motors used to actuate the manipulator are all of the same type (rotary or linear), it is often assumed that they all have the same maximum performance, both in terms of speed and of torque (or force). Conversely, a manipulator can be driven by actuators of the same type but different from each other in terms of performance. To account for this, one should consider the generalized Jacobian matrix instead of the Jacobian one. The matrix D must be defined by introducing suitable weights to normalize the actions of the actuators. Such a definition is arbitrary and there is no universal choice suitable for all situations. In [2] it is suggested to define two matrices, D_v related to velocities and D_f related to forces, as:

    D_v = diag( 1/q̇_1,max , 1/q̇_2,max , … )    (14)

    D_f = diag( 1/f_1,max , 1/f_2,max , … )    (15)

where q̇_i,max and f_i,max are, respectively, the maximum speed attainable by the i-th motor and the maximum force it can deliver. This choice gives the scale factors a physical, concrete meaning, since they depend on the characteristics of the actuators themselves. Figure 5 shows the manipulator driven by two actuators with different performance (q̇_1,max = 1.2 q̇_2,max).
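A minimal sketch of Eqs. (10)-(15), assuming an invented 2x2 Jacobian and actuator speed limits: it compares cond(J) with cond(J*), where J* weights each joint rate by its maximum speed. Since D_v acts on the joint-space rates q̇ = J ẋ, the product D_v J is used here as the weighted matrix; all values are illustrative.

```python
import numpy as np

# Invented Jacobian at some pose (maps task velocity to joint velocity)
J = np.array([[1.0, -0.5],
              [0.6,  1.1]])

# Assumed actuator speed limits: motor 1 is 1.2x faster than motor 2
qdot_max = np.array([1.2, 1.0])
Dv = np.diag(1.0 / qdot_max)   # Eq. (14)

# Generalized Jacobian: weight joint rates by their limits
J_star = Dv @ J

print("cond(J)  =", np.linalg.cond(J))        # geometric isotropy only
print("cond(J*) =", np.linalg.cond(J_star))   # effective isotropy, Eq. (13)
```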

Transmissions with different gear ratios

Figure 5. 5R 2-dof PKM with coincident grounded joints (O_1 ≡ O_2), driven by two different motors (M_1, M_2).

An effect that is usually neglected in the study of the isotropy of a manipulator is the presence of transmissions interposed between the robot structure and the actuators. Such transmissions change the torques/forces that the motors exert on the structure, as well as the speeds that they impose. This effect necessarily changes the kinetostatic properties of the robot and cannot be observed by analyzing only the Jacobian matrix of the manipulator: a generalized one must be adopted. While the motors exert torques F_a* and speeds Q̇*, the driven joints receive forces F_a = D_f F_a* and speeds Q̇ = D_v Q̇*, where the matrices D_v and D_f are defined as:

    D_v = diag( τ_1 , τ_2 )    (16)

    D_f = D_v⁻¹ = diag( 1/τ_1 , 1/τ_2 )    (17)

Figure 8 shows a detail of the transmission system of the considered manipulator. While the two motors are identical, motor M_2 is connected to the driven joint through a belt transmission, so that:

    τ_1 ≠ τ_2    (18)

Supposing that τ_2 = 2τ_1, the effects on the isotropy of the robot are shown in Figs. 9 and 10. In both cases, the isotropic behavior worsens dramatically with respect to Figs. 3 and 4.

Figure 6. Effects on the isotropy of the robot of actuators with different performance. Evaluation by means of the inverse of the condition number of the Jacobian matrix (case O_1 ≡ O_2).
Figure 7. Effects on the isotropy of the robot of actuators with different performance. Evaluation by means of the inverse of the condition number of the Jacobian matrix (case O_1 ≠ O_2).
Figure 8. 5R 2-dof PKM with coincident grounded joints (O_1 ≡ O_2): detail of the transmission.
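Continuing the sketch above, a check of Eqs. (16)-(18) with the same invented Jacobian: equal gear ratios rescale the weighted matrix uniformly and leave the condition number unchanged, while τ_2 = 2τ_1 generally changes it (here it degrades it). Seen from the motors, Q̇* = D_v⁻¹ Q̇, hence the product D_v⁻¹ J below; this is one consistent convention, values are illustrative.

```python
import numpy as np

J = np.array([[1.0, -0.5],
              [0.6,  1.1]])   # invented Jacobian, as before

def cond_with_ratios(J, tau):
    # Joint speeds Qdot = Dv Qdot* with Dv = diag(tau); the matrix seen
    # from the motor side is Dv^-1 J.
    Dv_inv = np.diag(1.0 / np.asarray(tau))
    return np.linalg.cond(Dv_inv @ J)

print("equal ratios :", cond_with_ratios(J, [1.0, 1.0]))
print("tau2 = 2*tau1:", cond_with_ratios(J, [1.0, 2.0]))
```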

Figure 9. Effects of different gear ratios on the isotropy of the robot within the workspace. Evaluation by means of the inverse of the condition number of the Jacobian matrix (case O_1 ≡ O_2).
Figure 10. Effects of different gear ratios on the isotropy of the robot within the workspace. Evaluation by means of the inverse of the condition number of the Jacobian matrix (case O_1 ≠ O_2).

CONCLUSION
The kinetostatic performances of a manipulator can be analyzed through indices related to the Jacobian matrix, especially in terms of isotropy. Generally, this investigation is carried out considering only the geometric structure of the robot, neglecting the effect of the drive system. This approach can lead to errors in the evaluation of the manipulator itself. Whether forces or velocities are considered, the Generalized Jacobian Matrix, obtained by identifying appropriate matrices D_f and D_v, makes it possible to adequately weight the different velocity and force contributions of each actuator. This weighting is performed using parameters related to the performance of the actuators themselves, which gives the matrix D a physical meaning. Furthermore, it can describe the possible inhomogeneous behavior of the drive system, which depends on the configuration reached by the robot.

REFERENCES
[1] T. Yoshikawa, Foundations of Robotics: Analysis and Control, MIT Press, Cambridge, MA, 1990.
[2] G. Legnani, Robotica Industriale, Casa Editrice Ambrosiana.
[3] J.K. Salisbury, J.J. Craig, Articulated Hands: Force Control and Kinematic Issues, International Journal of Robotics Research, Vol. 1, No. 1, 1982.
[4] G. Hanrath, G. Stengele, Machine Tool for Triaxial Machining of Work Pieces, United States Patent No. B1 (Specht Xperimental 3-axis Hybrid MC).
[5] J.P. Merlet, Jacobian, Manipulability, Condition Number and Accuracy of Parallel Robots, Journal of Mechanical Design, Vol. 128, 2006.
[6] H. Giberti, S. Cinquemani, S. Chatterton, How the Drive System Affects the Kinetostatic Properties of a Robot, Proc. of the 2nd International Multiconference on Engineering and Technological Innovation (IMETI 09), Florida, USA.

Classification of servomotors based on the acceleration factor

Hermes GIBERTI, Simone CINQUEMANI
Department of Mechanical Engineering, Politecnico di Milano, Campus Bovisa Sud, via La Masa 34, 20156, Milano, Italy

ABSTRACT
This paper focuses on the analysis of the so-called acceleration factor (α) [9,10], defined, for each motor, as the ratio between the square of the nominal torque of the motor and its moment of inertia. The coefficient α is defined exclusively by motor-related parameters and therefore does not depend on the task of the machine: it can be calculated for each motor using the information collected in manufacturers' catalogs. There is currently no theoretical study investigating the dependence of the acceleration factor on the electromechanical characteristics of the motor. One way to investigate these relationships is to collect catalog information for a significant number of motors produced by different manufacturers. This provides a statistical population on which to carry out the corresponding analyses. For this reason, a database of more than 300 brushless motors has been created that contains, for each record, information on the most important electromechanical characteristics. From the information collected, graphs are produced that show how motors of the same size can have different acceleration factors.

Keywords: Electric servomotor; acceleration factor; continuous service power rate.

1. INTRODUCTION
The need to increase production capacity while maintaining quality standards requires the implementation of increasingly high-performance automatic machines. In this context, the correct selection of the gearmotor unit is of strategic importance in the design phase of the machine. Unfortunately, the choice of the electric motor required to drive a dynamic load is closely related to the choice of the transmission. This operation, in fact, is subject to the limitations imposed by the working range of the motor and to a large number of conditions that depend on the motor (through its inertia) and on the reducer (through its transmission ratio, its efficiency and its inertia), whose selection is itself the object of the design. In the literature there are many procedures for the selection of a gearmotor unit [1-8] that, although they all start from the same theoretical basis, differ in their approach to the problem. This work focuses on the analysis of the continuous service power rate (also called acceleration factor) [8], defined, for each motor, as the ratio between the square of the nominal torque of the motor and its moment of inertia. Each manufacturer of brushless synchronous motors adopts its own technological solutions and its own construction scheme, generally different from those of other producers. However, the designer of an automatic machine who has to choose a motor can consider all motors as black boxes characterized by their acceleration factor. Currently there is no theoretical study that investigates the dependence of the acceleration factor on the electromechanical characteristics of the motor, so the comparison of motor performances in terms of acceleration factor is only possible in relative, not absolute, terms. The objective of this work is to lay the foundations for a deeper analysis of the acceleration factor, in order to give the designer of an automatic machine a tool to critically evaluate the performance of motors, not only in comparison with the others available, but also in absolute terms.
Table 1. Nomenclature
T_M : motor torque
J_M : moment of inertia of the motor
T_M,rms : motor root-mean-square torque
T_M,N : nominal torque of the motor
T_M,max^TH : theoretical maximum torque of the motor
T_M,max : maximum torque of the servomotor
ω_M : angular velocity of the motor
ω_M,N : rated angular velocity of the motor
ω̇_M : angular acceleration of the motor
P_N : rated power of the motor
m : mass of the motor
V_N : rated voltage of the motor
p : motor poles
T_L : load torque
J_L : moment of inertia of the load
T_L* : generalized load torque
T_L,rms* : root-mean-square generalized load torque
T_L,max : maximum load torque
ω_L : load angular velocity
ω̇_L : load angular acceleration
ω̇_L,rms : root-mean-square load acceleration
τ : gear ratio
η : transmission mechanical efficiency
α : acceleration factor
β : load factor
ω_M,max : maximum speed achievable by the motor
ω_L,max : maximum speed reached by the load
t_a : cycle time
C_th : thermal capacity of the motor
R_th : thermal resistance of the motor
τ_th : thermal time constant
K_T : torque constant
i : current flowing in the motor windings

2. THE MOTOR
Brushless motors (Fig. 1) are the most widespread electric actuators in the field of automation. Their working range (Fig. 2) can be roughly subdivided into a continuous working zone (called S1, limited by the motor nominal torque) and a dynamic one (called S6, limited by the maximum motor torque T_M,max). In general, the nominal torque of the motor decreases with the motor speed ω_M. To simplify the trend of the nominal torque and to take a precautionary approach, the continuous working range is approximated by a rectangle, identified by the two values T_M,N and ω_M,max (Fig. 3). Note that the approximation used to make the S1 field rectangular has consequences on the values of T_M,N and ω_M,max. The information in the catalogs is usually scarce and, in the best of cases, when the speed/torque curve is available, it must be processed to obtain the parameters of interest. Note also that the maximum torque reached by the servomotor, T_M,max, depends to a great extent on the drive associated with it and, in general, is different from the theoretical maximum torque of the motor, T_M,max^TH.

Fig. 1. Commercial brushless motor.
Fig. 2. Speed/torque curve of a common brushless motor.
Fig. 3. Approximated speed/torque curve.

At low speed, the restriction introduced by the drive system is related to the maximum current supplied to the motor. Since the torque depends on the current, this limit translates into a horizontal line in the working field of the motor, corresponding to a maximum torque different from the theoretical one. At higher speed, this restriction is superseded by the maximum bearable voltage condition, which causes a reduction of the maximum torque of the motor with its speed.

3. THE THERMAL PROBLEM OF ELECTRIC MOTORS
The thermal problem is of great importance in electric motors, and is generally the most binding condition in the choice of an electric motor for industrial applications. During operation, in fact, motors dissipate power W_d in the form of heat: this is mainly because the windings are affected by current flow (copper losses), but also because of eddy currents (iron losses) and mechanical effects.

The power dissipated as heat determines an increase in the motor temperature. The heat is partially removed to the environment, at least until a steady condition is reached. Calling θ(t) the temperature difference at time t between the motor and the environment, C_th the thermal capacity of the motor and R_th its thermal resistance, the differential equation for the power balance is:

    C_th dθ/dt + θ/R_th = W_d    (1)

which can be rewritten as:

    τ_th dθ/dt + θ = W_d R_th    (2)

where:

    τ_th = R_th C_th    (3)

is the thermal time constant of the motor (usually defined by the manufacturer and available in catalogs). Observance of the restrictions related to the thermal problem requires, when selecting a motor, that the maximum temperature reached during operation does not exceed the maximum admissible one. This requires solving Eq. (1). However, if the task is cyclic, with period t_a << τ_th, the problem can be simplified. In this case, the motor cannot follow the rapid thermal fluctuations of the power dissipation, because of its high thermal inertia. The motor temperature then evolves as if the motor were subject to a constant power dissipation W_d equal to the average power dissipated over the cycle. Assuming that the dissipation is mainly related to the Joule effect due to the winding resistance R, it is:

    W_d = (R / t_a) ∫₀^{t_a} i² dt    (4)

The torque T_M can be written as:

    T_M = K_T i    (5)

where K_T is the torque constant. Substituting Eq. (5) into Eq. (4), it is possible to arrive at the value of the so-called root-mean-square torque:

    T_M,rms = √( (1/t_a) ∫₀^{t_a} T_M² dt )    (6)

that is, the constant torque which, acting during the whole cycle, would produce the total energy dissipation actually occurring in the cycle. The condition of the thermal problem becomes:

    T_M,rms < T_M,N    (7)

where the nominal torque T_M,N can be obtained from the catalogs provided by the motor manufacturers and is defined as the torque that the motor can supply for an infinite time without overheating. The motor torque T_M can be written as:

    T_M = τ T_L* + J_M ω̇_L / τ    (8)

where:

    T_L* = T_L + J_L ω̇_L    (9)

is the generalized resisting torque on the load axis. When selecting the gearmotor unit, the transmission ratio τ and the motor inertia J_M are still unknown. At this stage, the transmission is considered ideal (η = 1). Equation (8) highlights the dependence of the motor torque on these variables, while Eq. (9) shows that all the terms related to the load are known. The root-mean-square torque is obtained from:

    T_M,rms² = (1/t_a) ∫₀^{t_a} T_M² dt = (1/t_a) ∫₀^{t_a} ( τ T_L* + J_M ω̇_L/τ )² dt    (10)

Expanding the term in parentheses and using the additivity of integrals, the root-mean-square torque becomes:

    T_M,rms² = T_L,rms*² τ² + 2 J_M (T_L* ω̇_L)_mean + J_M² ω̇_L,rms² / τ²    (11)

and inequality (7) can be written as:

    T_L,rms*² τ² + 2 J_M (T_L* ω̇_L)_mean + J_M² ω̇_L,rms² / τ² ≤ T_M,N²    (12)

4. THE ACCELERATION FACTOR OF THE MOTOR
Given that T_M,N is positive by definition, dividing by J_M we obtain:

    T_M,N² / J_M ≥ T_L,rms*² τ² / J_M + J_M ω̇_L,rms² / τ² + 2 (T_L* ω̇_L)_mean    (13)

Let us introduce the acceleration factor of the motor:

    α = T_M,N² / J_M    (14)

describing the performance of each motor, and the load factor:

    β = 2 [ ω̇_L,rms T_L,rms* + (T_L* ω̇_L)_mean ]    (15)

defining the performance required by the task. The unit of measure of both factors is W/s.
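A short numerical sketch of Eqs. (6)-(9), assuming an invented motion cycle and load data: it builds the motor torque history for a candidate (J_M, τ) pair and checks the thermal condition (7). All values are illustrative.

```python
import numpy as np

# Invented cycle: load acceleration profile (rad/s^2) over t_a = 1 s
t = np.linspace(0.0, 1.0, 1001)
acc_L = np.where(t < 0.3, 50.0, np.where(t < 0.7, 0.0, -50.0))

T_L = 2.0    # constant load torque (Nm), assumed
J_L = 0.05   # load inertia (kg m^2), assumed
T_L_star = T_L + J_L * acc_L                 # Eq. (9)

def T_M_rms(J_M, tau):
    T_M = tau * T_L_star + J_M * acc_L / tau     # Eq. (8), ideal transmission
    return np.sqrt(np.trapz(T_M**2, t) / t[-1])  # Eq. (6)

J_M, tau, T_M_N = 2e-4, 0.1, 1.5   # candidate motor/transmission (assumed)
ok = T_M_rms(J_M, tau) <= T_M_N
print("T_M,rms =", round(T_M_rms(J_M, tau), 3), "Nm",
      "<= T_M,N: accepted" if ok else "> T_M,N: rejected")
```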
The coefficient α is defined exclusively by motor-related parameters and therefore does not depend on the task of the machine: it can be calculated for each motor using the information collected in the manufacturer's catalogs. In addition, it could be reported in the catalogs themselves, to give a classification of commercial motors on

the basis of this index. The coefficient β, on the other hand, depends solely on the working conditions (applied load and law of motion) and is a measure of the power required by the system. Substituting α and β into inequality (13) we arrive at:

    α ≥ β + ( T_L,rms* τ / √J_M − ω̇_L,rms √J_M / τ )²    (16)

Since the term in parentheses is always positive or zero, the load factor β represents the minimum value of the right-hand side of Eq. (16). This means that the motor acceleration factor α must be sufficiently greater than the load factor β for inequality (7) to hold. A motor must be rejected if α < β, whereas if α ≥ β the motor can have enough rated torque if τ is chosen correctly. The preliminary motor choice is made by comparing only the α and β values; these values are easily calculated knowing the motor's mechanical properties and the load characteristics.

5. COMPARISON OF SERVOMOTORS
The aim of this paper is to lay the foundations for a detailed analysis of the acceleration factor (or continuous service power rate). The starting point is the answer to the following question: since motors of different size are difficult to compare, can similar motors have extremely different acceleration factors α? A negative answer to this question would make any further consideration unnecessary, indicating that the manufacturing parameters have a marginal influence on the acceleration factor. It would mean that the commercial brushless motors currently on the market have similar electromechanical characteristics, presumably those most suitable for obtaining high values of α. On the contrary, a positive answer would open a field of investigation to find out which electromechanical characteristics of a motor most influence the acceleration factor, and what is (if it exists) the theoretical or technological value of α that it is technically impossible, or not convenient, to exceed. One way to answer the question is to collect enough information from different manufacturers' catalogs for a significant number of motors. The resulting database is a useful instrument to compare different commercial devices and a suitable tool to highlight how the acceleration factor cannot be the only parameter describing the performance of a motor, and how all the characteristics of the motor influence the design of a machine.

The database
The database collects the main information available in the catalogs of about 300 motors whose power is between 15 W and 15 kW. The information collected refers to: brand, model, type of motor (AC or DC), torque constant, electrical resistance of the winding, number of poles, geometric dimensions and, naturally, the nominal torque of the motor and the moment of inertia of the rotor. The moment of inertia J_M includes the inertia of the rotor and that of the position sensor, a component necessary for the operation of the machine and therefore part of it. The inertia of any braking system or of any additional sensor is neglected.

Data analysis
Figure 4 represents the trend of the acceleration factor (y-axis) for the entire population of motors considered (x-axis). Each motor is identified by a progressive index number. Note how α can take on very different values and how some motors have an extremely high acceleration factor compared to the rest of the population considered.
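The screening rule and the admissible gear-ratio range can be sketched together, assuming the invented load cycle of the previous sketch and a small invented motor table (names and values are illustrative, not entries from the database). The limiting ratios follow from solving Eq. (16) as an equality; they correspond to Eq. (17) in the next section.

```python
import numpy as np

# Load description (same invented cycle as before)
t = np.linspace(0.0, 1.0, 1001)
acc_L = np.where(t < 0.3, 50.0, np.where(t < 0.7, 0.0, -50.0))
T_L_star = 2.0 + 0.05 * acc_L                       # Eq. (9)

T_rms = np.sqrt(np.trapz(T_L_star**2, t) / t[-1])   # T*_L,rms
a_rms = np.sqrt(np.trapz(acc_L**2, t) / t[-1])      # load rms acceleration
mean_cross = np.trapz(T_L_star * acc_L, t) / t[-1]  # (T*_L wdot_L)_mean
beta = 2.0 * (a_rms * T_rms + mean_cross)           # Eq. (15)

# Invented catalog entries: (name, T_M,N [Nm], J_M [kg m^2])
motors = [("A", 1.5, 2.0e-4), ("B", 0.2, 4.0e-4), ("C", 2.5, 1.2e-4)]

for name, T_N, J_M in motors:
    alpha = T_N**2 / J_M                            # Eq. (14)
    if alpha < beta:
        print(f"motor {name}: alpha={alpha:.0f} < beta={beta:.0f} -> rejected")
        continue
    r = np.sqrt(alpha - beta + 4.0 * a_rms * T_rms)
    tau_min = np.sqrt(J_M) / (2 * T_rms) * (r - np.sqrt(alpha - beta))
    tau_max = np.sqrt(J_M) / (2 * T_rms) * (r + np.sqrt(alpha - beta))
    print(f"motor {name}: alpha={alpha:.0f}, beta={beta:.0f}, "
          f"tau in [{tau_min:.3f}, {tau_max:.3f}]")
```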
Figure 4. Acceleration factor (α) for the motors in the database.

The graph cannot show whether these high acceleration-factor values are due to a high rated torque, to a small rotor inertia, or to a combination of the two. It is also not clear whether the acceleration factor is related to the size of the motor. For this reason, the values of α, of the nominal torque T_M,N and of the moment of inertia J_M are reported in the same graph for all the motors in the database (Fig. 5). For ease of reading, the motors are ordered by increasing moment of inertia. The three data series are normalized to their respective maximum values to allow a comparison between series.

Figure 5. Normalized acceleration factors (α) and nominal motor torques (T_M,N) for the motors considered, ordered by increasing moment of inertia J_M.

Figure 6 shows the trends of the motor mass (m) and of the nominal torque (T_M,N) for the motors considered, ordered by increasing moment of inertia J_M.

Figure 6. Normalized masses (m) and rated motor torques (T_M,N) for the motors considered, ordered by increasing moment of inertia J_M.

Looking at the graphs in Figs. 5 and 6, some interesting considerations can be made:
1. High values of the acceleration factor can be obtained even with high values of J_M, thanks to the increase of the nominal torque of the motor;
2. The rated torque of the motor and the inertia of the rotor appear to be proportional to each other;
3. Motors with the same moment of inertia can have extremely different acceleration factors;
4. The moment of inertia and the mass of the motors appear to be proportional.

These considerations give a first answer to the question posed: commercial brushless motors are built with different designs and generally have different performances. This means that some motors are better than others. Indeed, this conclusion is the starting point for investigating which electromechanical characteristics allow a motor to have higher performance. Table 2 shows, by way of example, the main characteristics of the motors classified as #44 and #230. Despite their different characteristics and dimensions, the two motors are practically identical as far as their acceleration factors are concerned.

Table 2. Comparison between two selected motors.

Let us now consider the transmission that could be coupled to each motor such that the thermal condition is verified. The range of suitable gear ratios can be calculated by solving Eq. (16). It results in:

    τ_min = ( √J_M / (2 T_L,rms*) ) [ √(α − β + 4 ω̇_L,rms T_L,rms*) − √(α − β) ]
    τ_max = ( √J_M / (2 T_L,rms*) ) [ √(α − β + 4 ω̇_L,rms T_L,rms*) + √(α − β) ]    (17)

where T_L,rms* and β do not depend on the motor. The motors in Table 2 have similar acceleration factors, but the moments of inertia of their rotors are quite different. Suppose their acceleration factors are higher, for a given task, than the load factor. Then motor #230, with the higher moment of inertia, would have a wider range of useful gear ratios than motor #44.

CONCLUSIONS
The acceleration factor, or continuous service power rate, is a parameter that characterizes the performance of a motor, defined as the ratio between the square of the nominal torque of the motor and the moment of inertia of its rotor. The greater the acceleration factor α, the wider the range of gear ratios that can be used to couple the motor to the load to be moved. The designer choosing the gearmotor unit, however, finds it difficult to understand whether the choice made is the best one, since there are no absolute references for the acceleration factor on which to base the selection. In other words, it is impossible, at this time, to assess whether the chosen motor is the best solution for an application, or whether a smaller one with the same acceleration factor, and therefore better in weight and dimensions, is available on the market. The analysis reveals how motors for the automation field (considering only brushless synchronous motors) are extremely heterogeneous in terms of performance, and highlights the need to define benchmarks for the acceleration factor to help the designer select the best motor-transmission coupling.

7. REFERENCES
[1] Pasch K.A., Seering W.P.
On Drive Systems for High-Performance Machines, Transactions of the ASME, Vol. 106, 1984.
[2] Van de Straete H.J., Degezelle P., De Schutter J., Belmans R., Servo Motor Selection Criterion for Mechatronic Applications, IEEE/ASME Transactions on Mechatronics, Vol. 3, 1998.
[3] Van de Straete H.J., De Schutter J., Belmans R., An Efficient Procedure for Checking Performance Limits in Servo Drive Selection and Optimization, IEEE/ASME Transactions on Mechatronics, Vol. 4, 1998.
[4] Cusimano G., A Procedure for a Proper Selection of Laws of Motion and Electrical Drive Systems under Inertial Loads, Mechanism and Machine Theory, Vol. 38, 2003.

[5] Cusimano G., Optimization of the Choice of the Electrical Drive-Device-Transmission System for Mechatronic Applications, Mechanism and Machine Theory, Vol. 42, 2007.
[6] Cusimano G., Generalization of a Method for the Selection of Drive Systems and Transmissions under Dynamic Loads, Mechanism and Machine Theory, Vol. 40, 2005.
[7] Van de Straete H.J., De Schutter J., Optimal Variable Gear Ratio and Trajectory for an Inertial Load with Respect to Servo Motor Size, Transactions of the ASME, Vol. 121, 1999.
[8] Roos F., Johansson H., Wikander J., Optimal Selection of Motor and Gearhead in Mechatronic Applications, Mechatronics, Vol. 16, 2006.
[9] Legnani G., Tiboni M., Adamini R., Meccanica degli azionamenti, Esculapio, Italy, 2002.
[10] Giberti H., Cinquemani S., Legnani G., Evaluation of Geared Motor Coupling in Highly Demanding Industrial Applications, Proc. of the 2nd International Multi-Conference on Engineering and Technological Innovation (IMETI 09), Florida, USA.

Impact of hydroelectric plant data quality on the analysis of past operations using a medium-term simulation tool

Ieda G. Hidalgo 1, Secundino Soares F. 2, Darrell G. Fontane 3, Marcelo A. Cicogna 4 and João E. G. Lopes 5
1 Ph.D. Electrical Engineering, Department of Electrical and Computer Engineering, State University of Campinas (UNICAMP), Brazil; iedahidalgo@gmail.com
2 Professor, Department of Electrical and Computer Engineering, State University of Campinas (UNICAMP), Brazil; dino@cose.fee.unicamp.br
3 Professor, Department of Civil and Environmental Engineering, Colorado State University (CSU), USA; darrell.fontane@colostate.edu
4 Ph.D. Electrical Engineering, Anhanguera Educacional (AESA), Brazil; marcelocicogna@gmail.com
5 Ph.D. Civil Engineering, Consulting Engineers, Brazil; jelopes1@gmail.com

ABSTRACT
This article presents the impact of the data quality of hydroelectric plants on the analysis of their past operation. A medium-term simulation tool has been applied to Brazilian hydroelectric plants that are under the coordination of the National Electric System Operator (ONS). To analyze the impact of data quality, the simulator reproduces the past operation of a plant twice: once with official data and average productivity, and once with adjusted data and variable global efficiency. The results show that the use of consolidated data together with variable global efficiency reduces the errors between the recorded and simulated variables, bringing the simulated operation closer to the real operation of the plant.

Keywords: hydroelectric plants, data quality, simulation tool, Brazilian model, operation planning.

1. INTRODUCTION
The energy generated by a hydroelectric plant is a function of the water discharged through the turbines, the difference between the forebay and tailrace levels, the head loss in the penstock, and the efficiency of the machines involved in the process. The level of precision used to represent the hydroelectric power generation function depends on the problem addressed. For example, in medium-term planning it is quite common to consider an average productivity for the plant, while in the short term a variable efficiency for each machine is usually considered. This article focuses on the medium term and its characteristics. The main objective is to compare the analysis of the past performance of a plant under two alternatives: official data and consolidated data. The tool used to reproduce the past operation is a medium-term simulator for the operation of hydroelectric power plants. The first section presents the Brazilian model for medium-term hydroelectric operation planning. The second details the concept and calculation of variable global efficiency. The third describes the medium-term simulator used in this article. The fourth shows a graphical and numerical analysis of the impact of data quality on the reproduction of past operations. The conclusions are given in the fifth section.

2. BRAZILIAN MODEL FOR MEDIUM-TERM HYDROELECTRIC OPERATION PLANNING
The most common formulations of the Brazilian system for hydroelectric operation planning include the production function and the water balance equation [12]. The goal of the production function is to quantify the power generation of a hydroelectric plant, as in Eq. (1):

    p = k · η · [ h_fb(x) − h_tr(u) − h_pl ] · q    (1)

where:

p : instantaneous power obtained in the process of converting hydraulic potential energy into electrical energy (MW).
k : gravity constant multiplied by the specific weight of water and divided by 10⁶; its value is 0.00981 (MW/(m³/s)/m).
h_fb(x) : elevation of the forebay, a function of the water storage x (m).
u : water release from the plant, i.e. the sum of the water discharged through the turbines and the water spilled (m³/s).
h_tr(u) : elevation of the tailrace channel, a function of the water release u (m).
h_pl : head loss in the penstock, a function of the water flow (m).
q : water discharge through the turbines of the powerhouse (m³/s).
η : constant or variable efficiency of the plant in the process of converting mechanical energy into electrical energy.
x : water storage in the plant's reservoir (hm³).

The water balance equation, Eq. (2), expresses the conservation of the water mass in the reservoir:

    x = x₀ + [ y + Σ_{j∈Ω} u_j − (q + s + ev + uc) ] · t / 10⁶    (2)

where:
x₀ : volume of the reservoir at the beginning of period t (hm³).
y : incremental water inflow into the reservoir during period t (m³/s).
Ω : set of plants immediately upstream of the analyzed plant.
u_j : water release of upstream plant j during period t (m³/s).
s : water spilled during period t (m³/s).
ev : evaporation of the reservoir during period t (m³/s).
uc : water withdrawn from the reservoir for non-energy purposes, such as urban water supply, irrigation and navigation, during period t (m³/s).
t : length of the period (s).

To facilitate the calculation of the parameters involved in the equations presented above, seven physical functions are used: area-level polynomial, level-volume polynomial, level-release polynomial, maximum power function, maximum water discharge function, efficiency function, and penstock head loss function [1].

3. CONCEPT AND CALCULATION OF VARIABLE GLOBAL EFFICIENCY
Global efficiency includes the losses and efficiencies involved in the operation. Using the global efficiency simplifies the production function, Eq. (1), without compromising the planning and the operating history of the plant. Figure 1 presents the simplification of Eq. (1) using the concept of global efficiency, η_G.

Fig. 1. Production function, Eq. (1), using the concept of global efficiency.

The variable global efficiency is represented as a matrix of hill curves. This matrix can be a function of gross head and power output. To obtain the global efficiency matrix from the data recorded by the plant, an optimization method can be used, such as the Solver tool in Excel [3]. The objective function adjusts the cells of the global efficiency matrix to minimize the sum of the squared errors between the global efficiency of the plant for each selected record and the calculated global efficiency, as in Eq. (3):

    Min Σ_{i=1}^{n} [ η_G(i) − η_G(i)_calc ]²    (3)

where:
n : number of operations registered in the plant's database.
i : index of the operation registered in the plant's database.
η_G(i) : global efficiency of the plant for the record of index i.
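A minimal sketch of Eqs. (1) and (2), assuming simple polynomial stand-ins for the plant's physical functions; the coefficients are invented, whereas real plants use the fitted polynomials mentioned above.

```python
K = 0.00981  # MW/(m^3/s)/m: gravity * specific weight of water / 1e6

def power_mw(x_hm3, q_m3s, s_m3s, eta):
    """Production function, Eq. (1), with illustrative polynomials."""
    u = q_m3s + s_m3s
    h_fb = 350.0 + 2.0e-3 * x_hm3   # forebay level vs storage (assumed)
    h_tr = 300.0 + 1.0e-4 * u       # tailrace level vs release (assumed)
    h_pl = 1.0e-5 * q_m3s**2        # penstock head loss (assumed)
    return K * eta * (h_fb - h_tr - h_pl) * q_m3s

def water_balance(x0_hm3, y, upstream, q, s, ev, uc, t_sec):
    """Reservoir update, Eq. (2); flows in m^3/s, volumes in hm^3."""
    return x0_hm3 + (y + sum(upstream) - (q + s + ev + uc)) * t_sec / 1e6

month = 30 * 24 * 3600.0
x1 = water_balance(5000.0, y=800.0, upstream=[150.0], q=700.0,
                   s=0.0, ev=20.0, uc=10.0, t_sec=month)
print(f"p = {power_mw(5000.0, 700.0, 0.0, 0.9):.1f} MW, x1 = {x1:.0f} hm3")
```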

η_G(i)_calc : global efficiency of the plant calculated using the global efficiency matrix, for the energy production and gross head of the record of index i.

4. MEDIUM-TERM SIMULATOR
The medium-term simulator for the operation of hydroelectric power plants represents in detail the operating restrictions active in this horizon, using weekly or monthly data. It can be used to plan the future operation or to reproduce the past operation of a period. When used to reproduce the past operation, it fulfills the function of a data analysis tool like others mentioned in Hidalgo (2004) [5], Hidalgo et al. (2009-A) [6], Hidalgo et al. (2009-B) [7] and Hidalgo et al. (2009-C) [8]. Its simulation process is based on the production function, Eq. (1), and the water balance equation, Eq. (2). The software design and the computational implementation of this simulator use the object-oriented paradigm [3], the C++ programming language [10] and the Structured Query Language (SQL) [4]. For the studies in this work, the simulator reproduces the water discharge trajectory from the initial volume and the recorded trajectories of generation, spillage and water inflow, as shown in Fig. 2. The advantages of this type of application are that it shows the impact of data inconsistencies on the water balance of the plant and that it can also be used to plan the future operation.

Fig. 2. In bold, input data to the simulator when its objective is to reproduce the trajectory of water discharge.

5. GRAPHICAL AND NUMERICAL ANALYSIS OF THE IMPACT OF DATA QUALITY ON THE REPRODUCTION OF PAST OPERATION
The medium-term simulator was applied to a large Brazilian hydroelectric plant. The data recorded by the plant from 09/01/06 to 08/31/07 were compared with the data resulting from the simulation of the same period. The comparison was done in two situations. In the first, the simulator worked with the official data provided by the company responsible for the operation. In the second, the simulator used the consolidated data obtained according to the methodology presented in Hidalgo et al. (2009-D) [9]. Basically, the differences between the official and the consolidated data lie in the six physical functions involved in the planning of the hydroelectric operation: area-level polynomial, level-volume polynomial, level-release polynomial, maximum power function, maximum water discharge function, and efficiency function.

A - Simulation using official data and average productivity
In this simulation, the official physical data of the company and the average productivity of the plant were used. The aim of the simulator is to reproduce the water discharge trajectory recorded by the plant, as in Fig. 2. Figs. 3, 4 and 5 show the recorded and simulated trajectories. It can be noticed that the official physical data recorded by the plant are not consistent with the reality of the operation, because the simulated trajectories were far from the recorded ones.

Fig. 3. Comparison between global efficiency trajectories.
Fig. 4. Comparison between the water discharge trajectories.
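The hill-curve fitting behind Eq. (3) can be sketched as follows, assuming synthetic operation records and a piecewise-constant efficiency matrix indexed by gross head and power. With nearest-cell lookup, the least-squares optimum that a Solver-style optimization converges to is simply the mean of the observed η_G in each cell; all data below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic records: gross head (m), power (MW), observed global efficiency
head = rng.uniform(50.0, 60.0, 500)
power = rng.uniform(100.0, 400.0, 500)
eta_obs = (0.85 + 0.001 * (head - 55.0)
           - 1e-7 * (power - 250.0) ** 2
           + rng.normal(0.0, 0.005, 500))

# Cell edges of the efficiency matrix (hill curves)
h_edges = np.linspace(50.0, 60.0, 6)   # 5 head bins
p_edges = np.linspace(100.0, 400.0, 7) # 6 power bins
hi = np.clip(np.digitize(head, h_edges) - 1, 0, 4)
pi = np.clip(np.digitize(power, p_edges) - 1, 0, 5)

# Least-squares solution of Eq. (3) for a piecewise-constant matrix:
# each cell takes the mean of the records that fall in it.
eta_matrix = np.full((5, 6), np.nan)
for a in range(5):
    for b in range(6):
        sel = (hi == a) & (pi == b)
        if sel.any():
            eta_matrix[a, b] = eta_obs[sel].mean()

resid = eta_obs - eta_matrix[hi, pi]
print("sum of squared errors, Eq. (3):", float(np.sum(resid**2)))
```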

Fig. 5. Comparison between the forebay level trajectories.

The physical information that most influenced the result of this study was the plant productivity, Fig. 3. Being overestimated, the simulator saves water in the reservoir, Fig. 4, to produce the recorded generation. This explains the increase in the forebay level, Fig. 5.

B - Simulation using consolidated data and variable global efficiency
In this simulation, the consolidated physical data of the plant and the variable global efficiency calculated according to Eq. (3) were used. Again, the aim of the simulator is to reproduce the water discharge trajectory recorded by the plant, as in Fig. 2. Figs. 6, 7 and 8 show the recorded and simulated trajectories of the plant.

Fig. 6. Comparison between global efficiency trajectories.
Fig. 7. Comparison between the water discharge trajectories.
Fig. 8. Comparison between the forebay level trajectories.

These three figures show a strong coherence between the recorded data and their reproduction by simulation. This demonstrates the advantages of using the variable global efficiency even in the medium term. For a numerical analysis of the results presented above, Table I presents a statistical summary of the sum and of the root-mean-square error between the recorded and simulated variables when the simulation objective was the reproduction of the water discharge.

Table I. Statistical summary: reproduction of the water discharge, recorded vs. simulated data (official vs. adjusted data).

The first column lists the variables analyzed (h_fb(x), h_tr(u), q, s and η_G). The second shows the sum and the root-mean-square error between the recorded data and the data simulated using the official information and the average productivity of the plant. The third shows the same errors for the simulation using the consolidated information and the variable global efficiency of the plant. The fourth column presents the error reduction between the second and third columns. The numbers in Table I show that the quality of the data has a great influence on the
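The error statistics of Table I can be reproduced with a few lines; a sketch assuming invented recorded/simulated series for one variable:

```python
import numpy as np

recorded = np.array([312.0, 298.0, 305.0, 320.0, 315.0])      # invented
sim_official = np.array([290.0, 270.0, 285.0, 300.0, 288.0])
sim_consolidated = np.array([311.0, 297.0, 306.0, 319.0, 314.0])

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

e_off = rmse(recorded, sim_official)
e_con = rmse(recorded, sim_consolidated)
print(f"RMSE official: {e_off:.2f}  consolidated: {e_con:.2f}  "
      f"reduction: {100 * (1 - e_con / e_off):.1f}%")
```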

analysis of the plant's past operation. Some minor differences presented in the third column of the table are explained by the 0.1 MW precision used by the simulator. The differences related to the tailrace level and to the water discharge are explained by the fact that the tailrace channel of the plant is represented by a cloud of scattered points. A more precise technique for measuring the water discharge is believed to reduce the dispersion of the cloud points, further improving the results.

6. CONCLUSIONS
This article compared the impact of data quality on the analysis of the past operation of a large Brazilian hydroelectric plant. The analysis was performed using a medium-term simulator. The objective of this simulator was to reproduce the operation of the plant from 09/01/06 to 08/31/07 using monthly data. The simulator input data were of two types: official data, used with the average productivity of the plant, and consolidated data, used with the variable global efficiency of the plant. The results were presented in the form of graphs and tables. All of them confirmed the importance of data quality. For the study analyzed, the average reduction of the summed squared error between the recorded and simulated variables was 99.16%. The impact of data quality on the analysis of past operations indicates that other computational models used by the energy sector for flow optimization, simulation, and forecasting may produce questionable results because of the quality of the data provided to them. Therefore, the pursuit of data improvement is important for the choice of an economical and reliable operation policy for the hydroelectric system.

7. ACKNOWLEDGMENTS
The research reported here was financed by CNPq and CAPES, Brazilian government agencies dedicated to the development of science and technology, which funded, at different times, the Ph.D. studies of the first author.

8. REFERENCES
[1] Arce, A., Ohishi, T. and Soares, S. (2002). Optimal dispatch of generating units of the Itaipu hydroelectric plant. IEEE Transactions on Power Systems, 17(1).
[2] Bloch, S. C. (2003). Excel for engineers and scientists. 2nd edition, Wiley, USA.
[3] Cicogna, M. A. (1999). Model for planning the energy operation of hydrothermal systems through object-oriented programming. Master's Thesis, State University of Campinas, SP, Brazil.
[4] Elmasri, R. and Navathe, S. (2000). Fundamentals of database systems. Addison Wesley, MA, USA.
[5] Hidalgo, I. G. (2004). Search system for data analysis of hydroelectric power plants. Master's Thesis, State University of Campinas, SP, Brazil.
[6] Hidalgo, I. G., Soares Filho, S., Fontane, D. G. and Cicogna, M. A. (2009-A). Management and analysis of data from hydroelectric plants. In 2009 IEEE Power Systems Conference & Exposition, Washington / Seattle, pp. 1-6.
[7] Hidalgo, I. G., Fontane, D. G., Soares Filho, S. and Cicogna, M. A. (2009-B). Computer-assisted system for the management, control and analysis of data from hydroelectric power plants. In 2009 World Environmental & Water Resources Congress (ASCE), Missouri / Kansas.
[8] Hidalgo, I. G., Fontane, D. G., Soares Filho, S. and Cicogna, M. A. (2009-C). A hydroelectric plant operation simulator as a data analysis tool. In 2009 World Congress on Computer Science and Information Engineering (CSIE), Los Angeles / Anaheim, pp. 14-18.
[9] Hidalgo, I. G., Fontane, D. G., Soares, S., Cicogna, M. A. and Lopes, J. E. G. (2009-D). Consolidation of data from hydroelectric plants. ASCE Journal of Energy Engineering (accepted).
[10] Hollingworth, J., Butterfield, D., Swart, B. and Allsop, J. (2001). C++ Builder 5 Developer's Guide. Sams Publishing, USA.
[11] ONS.
National Operator of the Electric System, < >.
[12] Pereira, M. V. F. and Pinto, L. M. V. G. (1985). Stochastic optimization of a multireservoir hydroelectric system: a decomposition approach. Water Resources Research, 21(6).

Design of a palpation simulator using magnetorheological fluids

Dean H. Kim, Julie A. Reyer
Department of Mechanical Engineering, Bradley University, Peoria, IL

ABSTRACT
This article describes the design and development of a prototype for a palpation simulator, that is, a device that replicates the act of touching or feeling the human body to determine the condition of the underlying parts. The work of this project is motivated by the need for more realistic and versatile palpation simulators. The final design uses magnetorheological (MR) fluids, whose properties can be changed with magnets to represent the abnormality to be palpated. Key design decisions include the choice of MR fluid, the strength of the magnetic field, and the packaging of the MR fluid. The improved realism and the increased number of scenarios for the prototype have been confirmed by local experts. This work has been done by an undergraduate team in the two-semester capstone design course of the Department of Mechanical Engineering at Bradley University. The project crosses the boundaries of traditional mechanical engineering by addressing a critical training need within the medical profession. A significant and unique feature of this project is its sponsorship by the Kern Entrepreneurship Education Network (KEEN). KEEN's mission is to graduate engineers who have the necessary mindset regarding entrepreneurship and innovation. Therefore, the work of this project includes extensive research and market analysis regarding the designed system.

Keywords: Medical Simulation, Palpation, Design, Magnetorheological Fluids.

INTRODUCTION
The use of simulators to train medical professionals in important skills has become more common due to factors such as limited patient availability and the increasing number of scenarios for which training is needed. Medical simulation aims to improve patient education and safety by replacing actual patient experiences with guided experiences under realistic conditions [1],[2]. One such important skill is palpation, the act of touching or feeling the human body to determine the condition of the underlying parts. This skill is needed in situations such as feeling for a breast lump, looking for a swollen thyroid, and determining internal bleeding after trauma. Variables involved in the medical professional's palpation include the number of fingers used and the pressure applied. Currently available palpation simulators are very limited in the important areas of realism and versatility of possible scenarios. For example, a popular breast palpation simulator provides only a handful of possible lump locations and sizes, severely limiting the effectiveness of the training. This article describes the development of a palpation simulator that uses magnetorheological (MR) fluids to represent the abnormal anatomy to be detected. MR fluids are a type of smart fluid whose properties, such as viscosity, can change when a magnetic field is applied to them, thanks to extremely small magnetic particles suspended in the fluid. Design decisions for this palpation simulator include the choice of MR fluid, the required magnetic field strength, the type of magnet, and the packaging of the MR fluid. The enhanced realism and the increased number of possible scenarios for the working prototype have been confirmed by local experts in related fields, including practicing physicians and the director of the Clinical Skills Laboratory at the local medical school.
Other design options were considered for this palpation simulator. One rejected option was the use of pin-sized mechanical actuators to generate the desired palpation surface; however, the achievable resolution of the palpation surface was not fine enough. Another rejected option was the use of electrorheological (ER) fluids, whose properties change when subjected to an electric current; however, there was great concern for the safety of the trainees because of the current circulating through the fluid. This work has been carried out by a team of four undergraduate students and two faculty advisors as a two-semester capstone design project. It incorporates many aspects of a mechanical engineering education, such as mechanical design and material selection. The project also crosses the boundaries of traditional mechanical engineering by addressing a critical training need within the medical profession. Therefore, extensive feedback has been obtained from local experts in related fields, including practicing physicians and the director of the Clinical Skills Laboratory at the local medical school. Previous projects have been completed and documented within the same senior project sequence; these focused on the development of a control system for a laboratory wind tunnel [3] and on a gear dynamometer [4], respectively. Finally, a significant and unique feature of this project is its sponsorship by the Kern Entrepreneurship Education Network (KEEN). KEEN's mission is to graduate engineers who have the necessary mindset regarding entrepreneurship and innovation. Therefore, the work of this project includes extensive research and market analysis regarding the palpation simulator.

The rest of the paper is organized as follows. KEEN's sponsorship of this project is briefly described, and guidelines are provided for the advanced design capstone course of the Department of Mechanical Engineering at Bradley University. The market analysis of medical simulators is presented, as stipulated by the KEEN sponsorship. The technical approach is described, in particular the various concepts and design options that were studied. The final design is then presented. Finally, the construction of the proof-of-concept prototype is described and the validation of the prototype with medical professionals is confirmed.

KERN ENTREPRENEURSHIP EDUCATION NETWORK
The Kern Entrepreneurship Education Network (KEEN), created by the Kern Family Foundation in 2005, aims to help universities graduate engineers who are equipped with an action-oriented entrepreneurial mindset that will contribute to business success and transform the US workforce. KEEN has provided the funds to sponsor this two-semester senior design project for the Department of Mechanical Engineering at Bradley University. The project incorporates many aspects of a mechanical engineering education, such as systems design, materials science, and design of experiments. It also crosses the boundaries of traditional mechanical engineering by addressing a critical training need within the medical profession. Therefore, this project is in line with KEEN's mission by including extensive market research and analysis regarding the designed system.

BRADLEY UNIVERSITY MECHANICAL ENGINEERING ADVANCED DESIGN CAPSTONE COURSE
The advanced design capstone course of the Department of Mechanical Engineering at Bradley University covers two semesters, specifically the fall and spring semesters of the academic year. Each project has an industry sponsor (i.e., a client), and the typical project team consists of four undergraduate students and at least one faculty supervisor. For this project, the client (i.e., KEEN) is represented by two additional Bradley faculty members, including the person directly responsible for securing KEEN's sponsorship. The course requires each student team to provide an initial proposal, regular progress reports, four oral presentations to classmates and professors, and at least one client presentation. Each team's initial proposal includes a timeline with milestones, a budget, and a list of deliverables. For this project, the expected deliverables are:
- a mid-term report, including research and analysis of at least 3 potential designs;
- design recommendations;
- a working proof-of-concept prototype;
- a final report;
- a CD containing all project documentation.

MARKET ANALYSIS
The team first did extensive research on the current market for medical simulators, gathering information on the quality, cost, and functionality of the available products. This is a necessary step to show the team what kind of market exists for the proposed palpation simulator. Full-body simulators are available that can simulate certain human functions such as breathing (via a built-in air pump) and urination. However, these simulators do not feel realistic because they use rubber to mimic human skin. These full-body simulators, shown in Figure 1, typically cost at least $200,000. Partial simulators are also available that provide training for specific tasks and/or particular sections of the body. These include generic breast examination simulators and palpation simulators, shown in Figure 2 and Figure 3, respectively.
However, these simulators have only a limited number of possible anomaly shapes and locations, so their effectiveness diminishes with repetition. The work described in this paper culminates in a proof-of-concept prototype that bridges the gap between expensive full-body simulators and less expensive but capability-limited partial simulators. This market research also included a visit to the Rager Clinical Skills Laboratory (RCSL) in downtown Peoria, which uses a full-body simulator in addition to some partial simulators. The team's communication with the laboratory director provided a better understanding of the client's needs, as did similar communication with nursing school students.

TECHNICAL APPROACH
The team made the following assumptions to simplify the design of the palpation simulator. The finger size of the student performing the palpation is considered constant. In addition, the soft tissue and the anomaly are each treated as homogeneous substances; that is, the density and elasticity of each remain constant throughout the substance. Two distinct design concepts were thoroughly researched and analyzed to determine the recommended solution. The first type requires the student to palpate the simulator, that is, to place a finger directly on the device. The simulator is designed to provide realistic feedback by recreating the resistive force of soft tissue and an abnormality. One solution for this type of simulator is to use pins individually controlled by servomotors; an anomaly could then be created by controlling the height of each pin. An advantage of this design is that servomotors are easily accessible and relatively inexpensive. However, one concern is the amount of space such a device would occupy. Another solution involves electrorheological (ER) fluids. These are a special type of fluid that experiences a change in viscosity when exposed to an electric field. The fluid can be arranged into individual cells in a grid, allowing the location of the anomaly to be chosen by sending electrical current to any particular cell. While space constraints were not a concern for this arrangement, cost was: a liter of ER fluid, the minimum order, can cost between $1,000 and $1,500. Less expensive fluid alternatives were investigated, which in turn led to the choice of magnetorheological (MR) fluids. MR fluids are similar to ER fluids, but the change in viscosity is driven by an applied magnetic field rather than an electric one: small iron filings in the fluid align along the flux lines, causing the viscosity to increase.


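To make this field-dependent behavior concrete, the short Python sketch below evaluates a simple Bingham-plastic approximation of an MR fluid, in which the applied magnetic field adds a field-dependent yield stress on top of the Newtonian base viscosity. All numeric constants (coefficient, exponent, saturation field, base viscosity) are illustrative assumptions, not measured properties of the fluid used in this project.

    # Minimal sketch of a Bingham-plastic approximation of MR fluid behavior.
    # All numeric constants are illustrative assumptions, not measured values.

    def yield_stress(b_field_tesla, k=40e3, alpha=1.5, b_sat=1.6):
        """Field-dependent yield stress in Pa, clipped at magnetic saturation."""
        b = min(b_field_tesla, b_sat)  # beyond roughly 1.2-2.0 T the fluid saturates
        return k * b ** alpha

    def shear_stress(b_field_tesla, shear_rate, mu_base=0.1):
        """Total shear stress (Pa): yield term plus Newtonian base-viscosity term."""
        return yield_stress(b_field_tesla) + mu_base * shear_rate

    if __name__ == "__main__":
        for b in (0.0, 0.58, 1.2, 2.0):  # 0.58 T matches the prototype's magnet
            print(f"B = {b:4.2f} T -> stress at 10 1/s shear: {shear_stress(b, 10.0):9.1f} Pa")

Under such a model, the resistance felt by a fingertip rises steeply with field strength and levels off near saturation, which is consistent with the behavior the team exploited.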
The second design concept uses haptic feedback to provide a virtual simulation through a device placed on the student's finger. The concept behind this design is to simulate the presence of an entity that does not actually exist. In this way, the device is similar to commercially available haptic devices. Ideally, the device would consist of a finger sleeve lined with sensors and connected to a computer interface, through which the user could program various anomaly cases and locations.

FINAL DESIGN
After extensive study of the various design options and concepts, the team decided to proceed with Design Concept 1, i.e., placing the finger directly on the device. The team also decided to use MR fluids to create the abnormality to be palpated. Therefore, the three categories of design decisions to be made are the MR fluid, the magnet, and the fluid packaging. The final design is the proof-of-concept prototype of a palpation simulator capable of producing an anomaly with a modulus of elasticity comparable to that found in human tumors. MR fluids are a type of smart fluid whose viscosity can be changed by applying variable magnetic fields. Generally, these fluids are used in a shear mode, where displacement occurs perpendicular to the flux lines of the magnet, a configuration often found in dampers, brakes, and clutches. However, this project uses the MR fluid in compression mode, where the compressive force is in the same direction as the flux lines leaving the magnet pole. The team chose a water-based MR fluid because it provides a modulus of elasticity of 300 kPa, which is about the same value as that of a ductal carcinoma (breast tumor) [5]. The magnet drives the rigidity that the MR fluid can achieve. MR fluids become saturated at magnetic field strengths between 1.2 and 2.0 Tesla, at which point they begin to lose their fluid properties and become very hard. The team chose a permanent magnet to activate the MR fluid. In particular, the prototype employs a 0.58 Tesla permanent magnet below the MR fluid contained in an LDPE package, separated only by a thin aluminum sheet. Magnetic modeling has been performed to model the flux lines and flux density for this design, which are shown in Figure 4 and Figure 5, respectively. If a stiffness other than that of the abnormality is desired, magnets of various strengths can be interchanged to allow for this change. The team also considered the use of electromagnets, but the achievable field strength was not strong enough to cause a noticeable change in the viscosity of the MR fluid. For the packaging of the MR fluid, the team decided that a bladder consisting of compartments connected by narrow passages would allow the MR fluid to flow freely in the absence of a magnetic field while providing the realistic sensation of an abnormality when a magnetic field is applied to the appropriate restricted passages. The chosen packing pattern is shown in Figure 6.

PROTOTYPE CONSTRUCTION
The prototype frame is made of 80/20 aluminum. Two layers of aluminum support the fluid container. The bottom layer is a ¼ in. thick aluminum plate that supports the fluid packing layer. In this plate, a pattern of holes is machined to match the fluid packing cells and allow for magnet placement. The pattern consists of approximately 40 holes (1 1/8 in. diameter) to provide adequate clearance for the 1 in. diameter magnets. Figure 7 shows the machining of these holes.
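As a rough companion to the magnetic modeling mentioned above, the sketch below evaluates the standard closed-form expression for the on-axis flux density of an axially magnetized cylindrical magnet, B(z) = (Br/2)[(z+L)/sqrt((z+L)^2+R^2) - z/sqrt(z^2+R^2)]. The remanence and magnet dimensions used here are assumptions chosen only to illustrate how quickly the field decays away from the pole face; they are not the prototype's measured values.

    import math

    def on_axis_b(z_m, length_m=0.0127, radius_m=0.0127, br_tesla=1.2):
        """On-axis flux density (T) of an axially magnetized cylinder at distance
        z_m from the pole face (standard textbook formula). Dimensions and
        remanence are illustrative assumptions, not the prototype's data."""
        zl = z_m + length_m
        return (br_tesla / 2.0) * (
            zl / math.sqrt(zl**2 + radius_m**2) - z_m / math.sqrt(z_m**2 + radius_m**2)
        )

    if __name__ == "__main__":
        # Field at the pole face and a few millimeters into the fluid layer.
        for z_mm in (0.0, 1.0, 3.0, 6.0):
            print(f"z = {z_mm:3.1f} mm -> B = {on_axis_b(z_mm / 1000.0):.3f} T")

Because aluminum is non-magnetic, the thin separator sheet described next mainly adds stand-off distance rather than shielding, which a check like this quantifies.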
The top layer is a thin sheet of aluminum (0.030 in. thick) that separates the magnets from the fluid packing. Both layers are screwed to a support rail on the frame with the supplied fasteners. The material chosen to package the MR fluid is a low-density polyethylene (LDPE) matrix of approximately 40 cells. The cell design uses a bottleneck that acts as a nozzle to restrict flow, which improves the overall performance of the fluid. A large syringe is used to fill these cells, and the final set of MR cells is placed on top of the aluminum supports. Figure 8 shows the filling of the container with MR fluid. The tissue layer is a piece of open-cell polyurethane foam that represents the soft tissue. It was simply cut to size, 12 in. x 12 in., to fit inside the frame, and placed on top of the fluid cells. The final covering, or outer layer, is a sheet of nylon fabric glued to the foam piece with a heavy-duty spray adhesive. This combination was chosen for its availability and low cost. Figure 9 shows the packaged MR fluid and the foam-fabric layer.

PROTOTYPE VALIDATION
This project crosses the boundaries of traditional mechanical engineering by addressing a critical training need within the medical profession, and medical professionals have been consulted throughout. The student team gave its final presentation to two of these medical professionals, as well as a representative of the Kern Entrepreneurship Education Network (KEEN). The medical professionals tested the final proof-of-concept prototype, as shown in Figure 10. They praised the prototype and even suggested additional potential applications for this technology. Most importantly, they agreed that the proposed technology could be a viable and applicable method of medical simulation.

CONCLUSIONS
This article describes the development of a palpation simulator that uses magnetorheological (MR) fluids to represent abnormal anatomy to be detected. Design decisions for this palpation simulator included the choice of MR fluid, the required magnetic field strength, the magnet type, and the MR fluid packaging. The proof-of-concept prototype has been built, and its potential feasibility for palpation simulation has been validated by local medical experts. This work was done as a two-semester capstone design project for the Department of Mechanical Engineering at Bradley University. In addition to incorporating typical areas of mechanical engineering, the project crosses traditional boundaries by addressing a critical training need within the medical profession. A significant and unique feature of this project is its sponsorship by the Kern Entrepreneurship Education Network (KEEN).

The extensive research and market analysis required by the project has fulfilled KEEN's mission of graduating engineers with the necessary mindset for entrepreneurship and innovation.

ACKNOWLEDGMENTS
This project has been fully supported by KEEN. The authors wish to thank Dr. Bob Podlasek, Dr. John Engdahl, Dr. Andy Chiou, Dr. David Buchanan, and Dr. Danuta Dynda. The authors would like to give special thanks to the members of the student team: Mr. Tim Myers, Mr. Kurt Friedrich, Mr. Carl Poettker, and Mr. Nate Adams.

REFERENCES
[1] H.R. Champion and A.G. Gallagher, "Surgical Simulation: A Good Idea Whose Time Has Come", British Journal of Surgery, Vol. 90, No. 7, pp.
[2] J. Balcombe, "Simulation Medical Training: Towards Fewer Animals and Safer Patients", Proceedings of the Fourth ATLA World Congress, 2004, pp.
[3] D. Kim, M. Morris, G. Leja, T. Scarlata, and S. Wylie, "Dynamic Analysis and Control System Development for a Laboratory Wind Tunnel", Proceedings of the ASME 2000 International Mechanical Engineering Congress and Exposition, Nov. 2000, pp.
[4] D. Kim, M. Morris, A.M. Moellenberndt, T. Rowlands, and T. Masha, "Design of Controller and Data Acquisition System for a Gear Dynamometer", Proceedings of the 2002 ASME International Mechanical Engineering Congress and Exposition, Nov. 2002, pp.
[5] J.O. Ladeji-Osias and N.A. Langrana, "Analytical Evaluation of Tumors Surrounded by Soft Tissue", Proceedings of the 22nd Annual EMBS International Conference, July 2000, pp.

Fig. 1 Commercially available full-body human simulator
Fig. 2 Commercially available breast examination simulator
Fig. 3 Commercially available palpation simulator
Fig. 4 Magnetic flux lines for the final design

Fig. 5 Magnetic flux density for the final design
Fig. 6 Packing pattern for the MR fluid
Fig. 7 Machining of the magnet locating holes in the support plate
Fig. 8 Filling the packing with MR fluid
Fig. 9 Prototype with packed MR fluid and foam layer
Fig. 10 Prototype testing by medical professionals

Odette Lobato Calleros, Universidad Iberoamericana-Mexico City, Department of Engineering, Extension of Paseo de la Reforma No. 880, Lomas de Santa Fe, Federal District, ZIP, Mexico, Tel: (52-55) ext. 4133; Fax: (52-55); odette.lobato@uia.mx
Humberto Rivera; Hugo Serrato; Federico Dorado; Christian Leon; Ma. Elena Gomez; Paola Cervantes; Adriana Acevedo, Universidad Iberoamericana-Mexico City.
Ignacio Méndez Ramírez, National Autonomous University of Mexico, Mexico City.

ABSTRACT
This article presents the methodology used to evaluate the satisfaction of users of Social Programs in Mexico and the results obtained from the pilot test. The causal model, the Mexican User Satisfaction Index (IMSU), is adapted to Mexico from the American Customer Satisfaction Index (ACSI). The methodology under development aims to become an alternative for the evaluation and improvement of Social Programs and Policies in Mexico.
Keywords: Customer Satisfaction, Social Policies, Government Services, Social Programs, National Satisfaction and Quality Index.

1. INTRODUCTION
The work reported here is part of the research project Mexican User Satisfaction Index (IMSU). Its objective is to design and implement a methodology for a National Satisfaction Index of Beneficiaries of Social Programs in Mexico. The goal is to arrive at a standardized, comparable, and reproducible methodology, based on the ACSI but adapted to the conditions of Mexico. This article focuses on the results of the pilot test of the model and its questionnaires. The history of Mexico is characterized by high levels of poverty and persistent inequality. According to Székely (2005), % of the population lived below the poverty line, and today 47% of the population (CONEVAL, 2007) still remains under this classification. For this reason, the Government is obliged to generate a Social Policy that strengthens the social protection of the poorest. One of the mechanisms to implement this Policy is the creation of Social Programs. In this sense, the evaluation of Social Programs is very important. As mentioned in the study by Talukdar et al. (2005), the World Bank recommends listening to what the users of these programs have to say about the goods and services they are receiving. The World Bank seeks to incorporate the "voice of the consumer" (consumers who are mainly the poor) in its socio-economic development projects. Thus, the World Bank's objective of representing the voice of the poor, its "target market", in the public goods and services it finances is analogous to the objective of a customer-driven company incorporating the voice of its clients in the marketing of its private goods (Talukdar et al., 2005: 101). In the case of Public Policies, Hastak et al. (2001: 172) mention that

the results of a Policy must be evaluated to determine whether the Policy is meeting its objectives. The evaluation should also provide feedback so that the Policy can be modified to improve its effectiveness. This is exactly what we intend to do in measuring beneficiary satisfaction with Social Programs in Mexico. This research project seeks to identify opportunities for improvement, which translate into changes in the operation of the programs towards a higher quality of the services offered by the Government to the poorest. In addition, this project seeks to contribute to the field of Engineering in Mexico through the development of a theoretical model and a methodology to evaluate the performance of processes and their relative importance from the user's perspective, thus providing engineers with a tool to improve processes.

2. BACKGROUND
Social Programs. The "National Development Plan" establishes a series of Social Commitments and Social Policies for the government. The way in which the Public Administration carries out Social Policies is through Social Programs. Social Programs, as defined by Gómez (2004), are "technical services related to specifically identified human needs, which tend to provide care to those groups that, due to their circumstances or conditions, are in a situation of need or marginalization" (Gómez, 2004: 31). Such groups are mainly made up of people living in poverty. All the programs evaluated in this project are the responsibility of the Social Development Secretariat (SEDESOL). As SEDESOL states, the primary mission of its Social Programs is to "generate equal opportunities for all people, regardless of their place of birth, income, family or sociocultural conditions, so that all citizens have access to the essential goods and services for their development" (SEDESOL, 2009).

Models of National Satisfaction Indexes. At the international level there is a tendency to establish National Satisfaction Indexes. The first satisfaction index, the Swedish Customer Satisfaction Barometer (SCSB), was created in 1989 (Fornell, 1992). The American Customer Satisfaction Index (ACSI) was introduced in 1994; the Norwegian Customer Satisfaction Barometer was introduced in 1996; and the most recent development is the European Customer Satisfaction Index (ECSI) (Johnson et al., 2001). Other countries are also developing national satisfaction indices, including New Zealand, Austria, Korea, Germany, Taiwan, and Hong Kong.

American Customer Satisfaction Index. The American Customer Satisfaction Index (ACSI) is a national indicator that measures the level of satisfaction among Americans with respect to the quality of the products and services they consume. The ACSI assesses ten sectors of the US economy, covering 41 industries and more than 200 companies and federal or local government services. The satisfaction index is obtained from the treatment of Americans' responses to a telephone questionnaire. The ACSI model for government has been useful for describing government programs and services in the United States and has also been successfully tested in Mexico in previous studies, such as the Diconsa Rural Supply Program and the Local Development Program (Microregions) (Lobato et al., 2006a; Lobato et al., 2006b). Therefore, the ACSI was selected to be tested and adapted to the Mexican reality in order to create a National Index of Social Programs in Mexico, the IMSU. This is the model that has been used in the seven Social Programs evaluated during this project.

Figure 1. ACSI model for government services and non-profit organizations: survey questions (Q1-Q6) measure the activities that feed Perceived Quality, which together with Customer Expectations drives Customer Satisfaction (ACSI, measured through overall satisfaction, comparison with the ideal, and confirmation/disconfirmation of expectations); Customer Complaints and User Trust (advocacy, confidence) are its outcomes. Source: ACSI Methodology Report (2005).

3. METHODOLOGY
The IMSU board is an interdisciplinary group made up of quality engineers, statisticians, and social scientists. Through a series of pilot tests, the board has adapted the ACSI model to evaluate seven Social Programs in Mexico. We briefly present some definitions and basic properties of the ACSI model.

ACSI Model. The ACSI uses an econometric model that measures several indicators that make up a satisfaction index, as well as other indicators related to latent variables or constructs. Customer opinions are collected through a survey, and the data are analyzed to obtain a descriptive model using the Partial Least Squares (PLS) method. The latter is an iterative procedure that integrates aspects of principal component analysis with multiple regression. What we measure are the manifest variables (survey questions), and through the model we find the values of the latent variables (including satisfaction), because the ACSI model is a system of cause-and-effect relationships (see Figure 1). In the ACSI government model, customer satisfaction has two antecedents: "Perceived Quality" and "Customer Expectations". The Perceived Quality construct has inputs that must be determined for each case study and that correspond to those Program processes where the user has direct contact with the operation (the components on the left side of the model). The consequences of satisfaction, according to the model, are "Customer Complaints" and "User Trust". Satisfaction itself is a latent variable (the central box in the model in Figure 1), measured through multiple manifest variables, which are the questions that make up the satisfaction survey. The index produces results on a 0-to-100 scale. One of the main objectives is to estimate the effect of satisfaction on user loyalty and trust, a construct of universal importance in evaluating current and future performance (for more details on the ACSI model, see the ACSI Methodological Report, 2005). One of the main advantages of this model is that it not only estimates customer satisfaction but also identifies the impact of each process experienced by the customer on perceived quality.

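The estimation step can be illustrated with a toy example. The sketch below is only a simplified stand-in for the full PLS path-modeling algorithm used by the ACSI: it scores each latent variable as the average of its manifest items rescaled to 0-100 and then estimates the impact of perceived quality on satisfaction by ordinary least squares. The survey responses are synthetic, not IMSU data.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic survey: three 1-10 manifest items per latent variable.
    n = 400
    quality_items = rng.integers(5, 11, size=(n, 3)).astype(float)
    sat_base = quality_items.mean(axis=1) * 0.7 + 2.0 + rng.normal(0.0, 0.8, size=n)
    satisfaction_items = np.clip(
        np.column_stack([sat_base + rng.normal(0, 0.5, n) for _ in range(3)]), 1, 10
    )

    def latent_score(items):
        """Average the manifest items and rescale the 1-10 survey range to 0-100."""
        return (items.mean(axis=1) - 1.0) / 9.0 * 100.0

    quality = latent_score(quality_items)
    satisfaction = latent_score(satisfaction_items)

    # OLS slope of satisfaction on perceived quality, standing in for the PLS path weight.
    impact = np.polyfit(quality, satisfaction, 1)[0]

    print(f"Satisfaction index: {satisfaction.mean():.1f} / 100")
    print(f"Estimated impact of perceived quality on satisfaction: {impact:.3f}")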
In other words, the model presents a process analysis, in which we can identify which processes need to be improved and which would give a better result, thus allowing targeted investment. The Federal Programs evaluated during the pilot tests were:
1. Milk Social Supply Program, powdered milk modality. Subjects: holders of the program's identity cards.
2. Subsidized Liquid Milk Program. Subjects: holders of the program's identity cards.
3. Day Care Program, support for parents. Subjects: working mothers and single fathers who receive the benefit.
4. Day Care Program, support for owners. Subjects: owners of centers that received support for opening or remodeling a center.
5. Cash transfers for seniors over 70 years of age. Subjects: adults older than 70.
6. Municipal Infrastructure in priority areas. Subjects: municipal officials in charge of infrastructure projects.
7. Concrete Floor Program. Subjects: residents of homes with concrete floors.
A user satisfaction causal model was developed for each program. The following paragraphs describe the general procedure for the design and pilot testing of the seven satisfaction models.
Qualitative study. Analysis of existing information on each program and its operating rules for a preliminary identification of key processes and main users. Extensive group interviews with program administrators were an important component of this step.
Field trips. Also part of the qualitative study, these trips consisted mainly of observing the delivery of benefits to the population, in-depth interviews with the beneficiaries (to find out which processes are key to them and to learn their lexical usage), and in-depth interviews with local program administrators.
Design of the causal model. The processes most likely to drive user satisfaction were identified and grouped into no more than four dimensions.
Questionnaire design. The instruments are composed of a set of homogeneous items for user expectations, perceived quality, and user satisfaction, as well as items that measure the drivers of satisfaction, which differ according to the characteristics of each program.
Pilot test. The questionnaires were tested on a small convenience sample with similarities to the population of each program. The pilot test was intended to test the causal model, the logistics of the fieldwork, and the interview procedure. Possible sources of variation in responses were sought. ACSI software was used to estimate the models: satisfaction rates and significant relationships. The results up to this point are reported in the rest of this paper. The next steps in this research project (ongoing as of this writing) are: conducting interviews on a national scale for each of the seven programs; estimating the final causal models; analyzing and interpreting the results; and identifying improvement opportunities.

4. RESULTS
We will use the Subsidized Powdered Milk Program to illustrate the results obtained from the pilot tests. The evaluation model is shown in Figure 2. Our first observation is that the main causal relationships proposed by the ACSI model are confirmed for this Program: "Perceived Quality" has a significant impact (2.599) on "Customer Satisfaction", and "Customer Satisfaction" in turn has a significant impact (1.873) on "Trust".

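Two of the quantities cited for the pilot below, Cronbach's alpha for instrument reliability and the 95% margin of error of the index, are conventionally computed as in the following sketch; the response matrix here is synthetic rather than IMSU data.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic 0-100 responses: rows are respondents, columns are survey items.
    scores = np.clip(
        rng.normal(79, 12, size=(250, 6)) + rng.normal(0, 6, size=(250, 1)), 0, 100
    )

    def cronbach_alpha(items):
        """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    index = scores.mean(axis=1)                                # per-respondent index
    margin = 1.96 * index.std(ddof=1) / np.sqrt(len(index))    # 95% margin of error

    print(f"Index: {index.mean():.1f}, margin of error: +/- {margin:.1f} (95% confidence)")
    print(f"Cronbach's alpha: {cronbach_alpha(scores):.3f}")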
The program obtained a satisfaction rating of 79 on a scale of 0 to 100, with a margin of error of +/- 2.5 at a confidence level of 95%. The Cronbach's alpha obtained for the instrument was 0.854, which is acceptable. The four components that affect the perceived quality of this Program are: Access to the Program, Product, Point of Sale, and Customer Service. Within each component, a group of manifest variables was identified. The component with the greatest impact on perceived quality was "Customer Service" (2.691), which received a score of 93. This is where improvements could be implemented as a priority. Within this component, the variable with the lowest rating was "Impartiality in delivery", with a rating of 77: users have the perception that some people receive more milk than their share, so this could be the variable with which to start the improvements. The component with the second greatest impact was "Point of Sale", which received a score of 86; within it, the lowest score comes from "Supply" (71.9), another area susceptible to improvement. "Access to the Program" has the third most significant impact on perceived quality, scoring 72. The lowest-rated manifest variable within this component (and within the entire model) was "Response time" (70.8). However, improvements should not start here, because the impact of this component on quality is not as strong as the impact of Customer Service (where improvements should start, as mentioned above). The Product component does not have a significant impact on perceived quality, which could confirm the tendency that service is essential to satisfying people, even when what is mainly being provided is a product.

Figure 2. Evaluation model for the Subsidized Powdered Milk Program, with the score of each manifest variable: Access to the Program (Clarity of information, 72.6; Fairness in admission decision, 73; Response time, 70.8); Product (Nutrition, 91.3; Price, 81.8; Taste, 92.3); Point of Sale (Supply, 71.9; Time, 86.9; Cleanliness, 90.5; Conditioning, 97.1); Customer Service (Impartiality in delivery, 77; Friendliness, 92.7); Customer Satisfaction (Satisfaction, 81.5; Confirm/disconfirm expectations, 79.9; Comparison to the ideal, 72.2); and User Trust (Product loyalty, 89.1; Quality-based loyalty, 89.3; Referrals, 91.6).

5. CONCLUSIONS
The client satisfaction evaluation model used in this paper, based on the ACSI model, is presented as a good alternative for evaluating the mechanisms that carry out the government's Social Policy, in this case Social Programs, which are also a way for citizens to give their opinion on whether the public policies

implemented by their governments meet the objectives for which they were created and satisfy the needs of the population. The evaluation model presented allows users to provide feedback on the processes studied. This could lead to specific changes in the variables where improvement needs were identified, which would result in an increase in the level of satisfaction of the beneficiaries of these Programs. The results presented here correspond to those obtained in the pilot test. We are currently working on the national evaluation, which will yield the final results of this project.

6. ACKNOWLEDGMENTS
This research project is financed by the National Council of Science and Technology of Mexico (CONACYT). We appreciate the collaboration of the members of the IMSU board: Dr. Ignacio Méndez (UNAM), Dr. Alexander von Eye (Michigan State University), Dra. Graciela González (CIMAT), Act. Alfredo Ramírez (CIDE), and Graciela Teruel (Universidad Iberoamericana).

7. REFERENCES
American Customer Satisfaction Index (ACSI) (2005). Methodological Report. The Regents of the University of Michigan, April, United States.
National Council for the Evaluation of Social Policy (CONEVAL) (2007). Executive Report on Poverty. Mexico. (Retrieved May 2010). From: pdf
Fornell, Claes (1992). "A National Customer Satisfaction Barometer: The Swedish Experience". Journal of Marketing, Vol. 56, No. 1, January.
Gómez Serra, Miquel (2004). Evaluation of Social Services. Barcelona: Gedisa.
Hastak, Manoj; Mazis, Michael; Morris, Louis (2001). "The Role of Consumer Surveys in Public Policy Decision Making". Journal of Public Policy and Marketing, Vol. 20, USA.
Johnson, Michael D.; Gustafsson, Anders; Andreassen, Tor Wallin; Lervik, Line; Cha, Jaesung (2001). "The Evolution and Future of National Customer Satisfaction Index Models". Journal of Economic Psychology, No. 22.
Lobato, Odette; Serrato, Hugo; Rivera, Humberto (2006a). Final Version of the Report on the Application of the Methodology for Obtaining the Beneficiary Satisfaction Index of the Local Development Program. Mexico City.
Lobato, Odette; Serrato, Hugo; Rivera, Humberto (2006b). Final Version of the Report on the Application of the Methodology for Obtaining the Beneficiary Satisfaction Index of the Diconsa Social Supply Program. Mexico City.
Ministry of Social Development (SEDESOL). Strategic Objectives of Social Development. (Retrieved November 2009). From: bjetivos_estrategicos_desarrollo_social.pdf
Székely, Miguel (2005). "Demystification and New Myths about Poverty: Listening to What the Poor Say". Social Development Secretariat; CIESAS; ANUIES. Mexico City.
Talukdar, Debabrata; Gulyani, Sumila; Salmen, Lawrence F. (2005). "Customer Orientation in the Context of Development Projects: Insights from the World Bank". Journal of Public Policy & Marketing, Vol.
... of an Evaluation Model of the Social Supply Program of Leche Liconsa. Thesis for the master's degree in Quality Engineering, Universidad Iberoamericana.

The Challenge between Traditional and Environmental Aspects Facing Modern Architectural Design: A Case Study
Mahmoud Tarek M. Hammad, Al Azhar University, Faculty of Engineering, Department of Architecture, Nasr City, Cairo, Egypt.

ABSTRACT
The modern buildings designed in the Mamluk and Fatimid parts of Cairo present a strange sight to the casual observer and the interested visitor alike. This part of Cairo displays a number of historical cultures, exhibiting the sequence of very prominent and important episodes in the Islamic history of Egypt. The Children's Cancer Hospital near the Nile River in Cairo is a world-class pediatric center dedicated to caring for children with cancer, yet in the heart of Islamic Cairo its contours are based more on reform than on tradition: the designer drew inspiration from the boats on the river for its structure. How this design has served the purpose and functions of the building, and its compatibility with the Islamic culture of its surroundings, is the focus of this article. The tools used by the author include an analysis of the plans from the ecological and environmental point of view of the site. A comparative study was carried out of other new buildings in the area, namely Dar El Eftaa and Mashiakhet Al Azhar, where some traditional Islamic considerations were adopted. The challenge is how these traditional contours can best fit the function of the building and the facilities it is intended to offer to patients, on the one hand, and how the new building has met the requirements of the traditional environment, on the other. The hospital and the other two establishments were analyzed through the collected data, including plans and elevations, design elements, and the architectural treatments that achieve ventilation and thermal balance. Other elements such as aesthetic considerations, facades, and ornaments were also analyzed. The results show an extensive use of glass in the hospital that offers neither sufficient thermal insulation nor reasonable lighting. Internal spaces were not used efficiently; the site and the available area are quite limited, so the waste of space is critical. The design of this building was based on its position near the Nile River in an area of Islamic traditions, which gives it an odd look. The other two establishments, although they have completely different functions, were designed with environmental requirements and the historical background in mind.
Keywords: architectural design, historical landscape, historical cultures, traditional contours, environmental requirements.

1. INTRODUCTION
The history of architecture in Egypt shows a variety of civilizations and cultures of the peoples of this region. It shows times of great supremacy and sharp declines of the dynasties that ruled throughout a long history, from ancient Egypt to modern times. The people of Egypt adopted Islam early, only 28 years after the descent of Islam on the Prophet Muhammad. Since then, Islamic architecture in general, and especially in Egypt, has reacted to and modified the earlier famous architectural styles of the Greeks, Byzantines, Persians, and Romans. Islamic influence, in turn, was an important factor contributing to the development of architecture in Andalusia and throughout Europe during the Renaissance era. Islamic architecture, in essence, encompassed a wide range of styles, both secular and religious. Cairo, as the capital of Islamic Egypt, displays the dynasties of a long Islamic history since its inception.
Several neighborhoods are distinguished by their dynastic style. The preservation of this heritage in such a historic city is therefore of vital importance, not only for Egyptians but also for humanity in general. Although this principle has been violated many times in various districts, the survival of many structures remains a human treasure. The foundation of the Children's Cancer Hospital and many other institutions is simply a violation of the principle of preservation of human heritage and its environment. This study is comparative and analytical in its purpose of showing the degrees of contradiction with the dominant Islamic styles in the region.

2. ENVIRONMENTAL BACKGROUND
The origin of the ancient city of Cairo, its core and its flanks, is a function of the interaction of various elements, including environmental, religious, economic, social, and political ones. One of the most important aspects that distinguishes and characterizes the region is the Islamic culture with its social and traditional characteristics. The Islamic religion, in addition to its nature as a relationship between God and the individual, forms a way of life with its special respect for others and the conservative attitudes of families. These characteristics are reflected in the dominant types of architecture. The location of Egypt in general, and Cairo in particular, under arid and hot conditions has greatly affected the urban and architectural pattern. This pattern is evident in the adoption of elements such as thick walls, Malkafs, domes, courtyards, and Mashrabias, and in an urban fabric with an organic layout.

3. HISTORICAL BACKGROUND
Fatimid Dynasty. The city of Cairo has witnessed a sequence of historical events ever since the Fatimid commander Jawhar al Siqilli (a former Sicilian slave) established it as a new neighborhood of Fostat (the capital of Egypt since the Islamic conquest by Amr Ibn El Ass in 640, which was founded next to the fortress of Babylon). During the Fatimid dynasty, several magnificent buildings were founded, including the Al Azhar Mosque, a famous mosque and at the same time the oldest university teaching the Islamic faith in its various sects. Other surviving Fatimid structures include the Al Aqmar Mosque (1125), as well as the monumental gates in the city walls of Cairo commissioned by the powerful Fatimid vizier emir Badr Al Jamali (CPAS 1992). In addition to these elegant constructions, elaborate funerary monuments were founded. The houses were simple and closed, characterized by open courtyards and non-straight entrances. Fountains were erected in the courtyards, along with the use of the Malkaf (air-catching unit) and the Mashrabia. It should be said that from the time the Fatimids came to power the city expanded gradually, subsequently exhibiting a special architectural style in each era, all of which bear the traces of Islamic art. This art in general is strongly influenced by Islamic faith and traditions and is in harmony with climatic and environmental aspects. Elements of Islamic architecture facilitated codes of conduct within the multiple historical contexts of the Islamic world.
Mamluk Dynasty. The Mamluk dynasty began at the end of Saladin's Ayyubid dynasty, after his famed triumphs over the many Crusader campaigns. Religious concepts offered the Mamluks generous patterns of architecture and art, with majestic domes, courtyards, and towering minarets stretching throughout the city. The decorative arts of Mamluk architecture, including enameled and gilded glass, inlaid metalwork, woodwork, and textiles, flourished under their rule and had a profound impact and influence throughout the Mediterranean, both in the north (Europe) and in the south (the northern coast of Africa). Distinguished Mamluk rulers established the patronage of public and pious foundations, including madrasas (schools), mausoleums, minarets, and bimarestans (hospitals) (CPAS 1992).

4. ELEMENTS OF THE ARCHITECTURAL STYLE OF THE FATIMID AND MAMLUK DYNASTIES
The Islamic architectural style of the Fatimid and Mamluk dynasties can be identified by the following elements (Fig. 1):
- Minarets as towers, and the Mihrab indicating the qibla.
- Sahn (courtyard).
- Central fountains (Maida) used for ablutions.
- Iwans mediating between different sections.
- Domes, vaults, moqarnas, and arches.
- The use of geometric shapes and repetitive art (arabesque).
Fig. 1. Photo 1 shows the use of ornaments; photo 2 shows two vaulted iwans and a fountain in the middle of the sahn; photo 3 shows a sahn as an aesthetic and climatic element of the Islamic style; photos 4 and 6 show the use of calligraphy; photo 5 shows an arched window style and moqarnasat; photo 7 shows the use of arabesque decoration; photo 8 shows the bright ornaments of the Mihrab.

5. COMPARATIVE ANALYTICAL STUDY OF THE THREE MODERN INSTITUTIONS
The Oncology Hospital. The location of the hospital: The hospital is located a short distance from Fostat, Cairo, to the west of the wall of Magra El Oyoun (the fortification of Saladin), in the center of a randomly built urban district. The construction and surrounding area cover about 10 thousand square meters.
Three entrances lead to the main building. The site is also surrounded, primarily in the foreground, by tracts of green lawns and parking lots. To establish a better environment, several blocks of unplanned (randomly built) buildings and houses were demolished. However, the neighboring areas contain slaughterhouses and their related industries, which form a source of serious pollution (Fig. 2).

Fig. 3. Design elements of the hospital building on the ground and first floors. (TCHP 2002)
Fig. 2. A satellite image showing the hospital site plan; ground shots 1-4 show the surrounding streets and buildings. Source: (Google 2008)
Elements and components of the hospital: The hospital is an eight-story building with a total area of 10,000 m2. Clinics, the emergency department, and reception constitute the ground floor. These units are accessed through separate entrances from the main street. The other medical departments are distributed on the upper floors according to their functions. The hospital has modern electromechanical systems, including lighting, air conditioning, a computer network, and an efficient system for water treatment and waste disposal, in addition to highly sophisticated medical equipment (Fig. 3).
Conceptual design: The architectural concept adopted was to create a building with integrated and functional facilities. The core is essentially a felucca (boat)-shaped block with its sails, inspired by the site's location near the Nile River. The concept achieved an optimum level of service efficiency but failed to be in harmony with the prevailing Islamic style that characterizes the region. The designer (Jonathan Bailey) created a construction that is quite foreign to its environment. In fact, the establishment came into being devoid of any Islamic elements, and the main building is a mere block of Western design. The Islamic style, by contrast, reveals a dynamic relationship between blocks and spaces. Introducing Islamic elements into a modern hospital is a real challenge for the designer, and such elements could be inspired by the surrounding Islamic architecture.
Mashiakhet Al Azhar Complex and Dar El Eftaa. This modern complex replaces the old separate premises of the two institutions. Both institutions are under the control of the Al Azhar establishment: the first (Mashiakhet Al Azhar) is the headquarters of the offices of the grand sheikhs (imams) and of a large number of specialized centers for research, publications, dissemination of the faith, and international relations; the other (Dar El Eftaa) is formally recognized as the sole source of fatwas (interpretations of Islamic law). The two institutions previously occupied old buildings, generally lacking sufficient space and proper facades. These are the main reasons for seeking another site with enough space for various expanded facilities.
The location of Mashiakhet Al Azhar and Dar El Eftaa: The site chosen for the new structures is in the Fatimid region of Cairo, a short distance from the Al Azhar Mosque (the famous Fatimid mosque of Cairo), on a hill clearly elevated above the surrounding streets and cemeteries, at the junction of Salah Salim and Al Azhar streets. The surrounding Islamic environment had the main impact on the planning and architectural design of both buildings. The complex forms an architectural model that integrates all the elements of Islamic architectural style and art while considering the overall historical Islamic landscape (Fig. 4).

The eastern main entrance is designed to lead to the main facade. The entrances for officers and officials are located on the side facades (Fig. 5).
Elements of Dar El Eftaa: This is essentially an administrative building, where consulting services are offered to citizens and authorities. Its area is approximately 2,000 m2, and it is made up of five floors. The administrative offices are arranged according to the sequence of functions. The inner courtyard is used as a prayer yard surrounded by side courtyards. The office of the Mufti (Sheikh of Azharia) occupies a special and central place on the main facade, looking outward through a magnificent Mashrabia (a famous Mamluk architectural element resembling a modern balcony) (Fig. 6).
Fig. 6. Design elements of the El Eftaa building on the first and second floors. (DEE 1997)
Fig. 4. A satellite image showing the site plan of the Mashiakhet Al Azhar and Dar El Eftaa complex (Google 2008); ground shots 1-4 show the surrounding streets and buildings.
Elements of Mashiakhet Al Azhar: The building is a general administrative construction occupying an area of 6,000 m2 in a lot of m2. It consists of eight floors to accommodate the different administrations according to the program of utilitarian needs and functions. These administrative floors are connected to a central octagonal form in which the office of the Grand Imam is located.
Figure 5. Design elements of the Al Mashiakhet building on the ground and first floors. (AMP 1999)
Conceptual design: The conceptual design of these two buildings is compatible with the Islamic heritage of Mamluk Cairo. The concept adopted by the designer (ABDTC, a local office) relied on achieving an Islamic style with a modern and contemporary spirit, integrating with the many surrounding Islamic buildings and the historical landscape, and incorporating the common elements of the nearby Fatimid and Mamluk monuments.

6. COMPARATIVE ANALYTICAL STUDY OF ARCHITECTURAL FORMS AND ELEMENTS IN THE THREE INSTITUTIONS
The study revealed the following results.
External facades: All the external facades of the hospital are mainly made of glass, which gives an impression of transparency from a distance that strongly contradicts the Islamic design concept for external facades. The latter has fewer openings facing the outside, while the main and important openings face the internal courtyards, thus achieving the important Islamic principle of privacy. The designs of Mashiakhet Al Azhar and Dar El Eftaa beautifully present this element (Fig. 7).

Fig. 8. Photo 1 shows the effect of the Mashrabiat on the internal facade in reducing glare and heat transmission inside the building; photo 2 illustrates the glare resulting from the extensive use of wide glass openings in the patient rooms and how shades are used to address this problem.
Fig. 7. Ground photographs showing the exterior facades of the three institutions.
The glass facades of the hospital, although double-layered, do not really achieve shaded and/or conditioned interiors; curtains and central air conditioning are widely used. To achieve shade and light refraction, metal structures were used in many parts of the front facades. These structures, which include sail shapes in some parts, are quite far removed from any Islamic style in the surrounding area. The front facades of both Islamic buildings, by contrast, have fewer openings of limited size, and the use of Islamic treatments such as Mashrabiat, ornaments, colored material finishes, and other elements is in harmony with the Islamic architectural style.
Openings: Because of the large glazed surfaces of the hospital facades, sunlight and glare became a daytime problem for the large spaces of the structure. To overcome this obstacle, heavy curtains and double glass sheets with argon gas in between were used to filter harmful sun rays, an essential consideration for children undergoing radiation exposure and chemical treatments. The use of wide glass planes did not result in successful thermal insulation; for this reason, the hospital relies on mechanical and electrical means at all times to achieve these objectives. The openings in the external facades of Mashiakhet and Dar El Eftaa are few and in many cases covered with decorative wooden arabesques to mitigate the light intensity and provide shade and moisture conservation. The design of the openings adopted in the two Islamic institutions was modernized in a way that does not lose its elemental essence (Fig. 8).
Entrances: Islamic architecture has a common characteristic style for main entrances, which generally reach almost the height of the building or of the first floor and are strong and assertive. In the Islamic compound of Al Azhar, the two institutions are designed with this style of entrance on the fronts that face the main streets. All the entrances are prominent, strong, and framed by a pointed arch. In the hospital, metal structures were used to delimit and mark the main entrances; the other entrances are usually simple, with glass front and middle doors.
Elements of structural form (columns, domes, and arches): The Islamic architecture of the neighborhood embraced important and characteristic structural elements. Columns are one of these elements and were carried over in the early decades of Islam from churches and temples (Abd El Gawad 1987). In later periods the columns were modified, especially during the Mamluk dynasty, exhibiting elaborate shapes. The designer of the two Islamic institutions drew inspiration from different column shapes, using modern cladding materials such as marble and other manufactured materials. The dome element was successfully used to cover most of the main building. Islamic-style aqoud (arches), at scales similar to those used in the Fatimid and Mamluk dynasties, were also used, with some modifications to give the construction a contemporary character (Fig. 9). In the hospital building, the designer adopted a different concept and culture, using, for example, a spherical shape made of glass and metal structures.
He used these forms both as building volumes and as decorative patterns, which clash with the surrounding styles.

The shapes of the hospital, on the other hand, show blocks in symbolic forms and use glass facades that suggest transparency and interaction with the outside. This approach contradicts Islamic architectural principles. Covering the front with a metal sail-like structure gave an impression of harmony with the site's location near the Nile River, but it is not compatible with the surrounding environment (Fig. 10).
Figure 9. Photos 1 and 2 illustrate some form elements (columns, domes, arches) in the Al Azhar complex, inspired by and developed from the Islamic prototypes shown in drawing 3. (Abd El Gawad 1987)
Ornaments and moldings: The ornamental forms and moldings of Islamic Mamluk architecture are quite different from those of the Greek or Byzantine styles. They reflect the Islamic culture and spirit. The decorative principles rest on the basic foundations of calligraphy, geometry, repetition, and multiplication (Clevenot and Degeorge, 2000). The spaces are articulated using a variety of decorative elements. Famous ornamental patterns were used in the Islamic complex of Al Azhar to decorate the front facades, the entrances, and some other surfaces (especially the Mashrabiat). This is not intended to satisfy utilitarian needs, but rather to give a spirit in harmony with the surrounding buildings; in other words, to define the regional identity in keeping with the eternal principles of Islam. In the hospital, by contrast, what is built is basically functional. The use of Islamic structures and forms would be a challenge for the designers, and local designers and architects are encouraged to play this role successfully. The two Islamic buildings in the Al Azhar compound are examples of modern buildings with an Islamic spirit and culture. Unfortunately, this Islamic approach was not adopted in the design of the hospital, although all the elements of protection against contamination and all clinical regulations could have been maintained behind an Islamic external identity.
Architectural forms: The architectural forms established in the Mashiakhet Al Azhar and Dar El Eftaa buildings consist of a large-scale, strong block balanced around the central axis of the inner courtyard, following important principles of Islamic architecture. The balance and symmetry around an axis is clearly noticeable in the elevations and plans of the two Islamic buildings. There is also an efficient use of Islamic decorative values, resting in unity and harmony with the whole of the architectural elements (arches, openings, ornaments, and calligraphy), adopted at new scales that respect the function without losing the essence.
Fig. 10. Black-and-white drawings of Al Mashiakhet and El Eftaa showing balance and symmetry around the axis of the courtyard. The other (blue) drawings show the symbolic form of the hospital, which is not in harmony with the Islamic forms.
Internal facades: The design of the Islamic institutions depended on decorating the external facades with Islamic elements, repeated on the internal facades with extensive use of calligraphic patterns. Mashrabiat and wooden arabesques are used to decorate the large openings in the different facades. Marble with geometric patterns and different colors is used on floors and walls. The ceilings are decorated with geometric units and colored finishing materials. Carpets and furniture decorated with repeating Islamic patterns are spread throughout the courtyards and reception rooms (Fig. 11).
In the interiors of the hospital, imported materials are used for wall and floor coverings. The finishing materials used to cover the walls are chemically treated products that resist bacteria and microbes. These materials and treatments could equally be used in the interiors of Islamic buildings without contradiction and could be crafted to be in harmony with the Islamic spirit (Fig. 12).
Figure 11. Photos from Al Mashiakhet illustrate the Islamic ornamental patterns used on internal walls and ceilings; on the floors, marble with geometric patterns can be seen.

Fig. 12. Photos of the hospital illustrate the colorful, bacteria- and microbe-resistant chemical finish materials, as well as the colorful imported terrazzo flooring.

7. RESULTS AND DISCUSSION
The siting and design of the two buildings of the Al Azhar complex, Al Mashiakhet and Dar El Eftaa, have achieved complete harmony with the surrounding environment. The hospital, on the other hand, may be well adjusted to its needs and utilitarian functions, but it lacks an overall traditional and environmental balance with the district as a whole. The functional and traditional aspects could, however, have been fulfilled at the same time; it is the work and art of the designer to develop certain Islamic elements to fit the purpose of the building. The engagement of local designers with traditional and environmental demands is noticeable in many other establishments. The location of the hospital is another failed choice: it could have been located in the sprawling outer districts of Cairo, and if there is some need for it to be present in this area, it must be compatible with its environment. The location of such a critical and highly sophisticated institution largely contradicts environmental and traditional requirements.

8. RECOMMENDATIONS
Complete compatibility with environmental and traditional aspects is a challenge for the designer, who also strives to apply technological advances in the relevant fields. The contradiction between the two cultures, the traditional and the contemporary, can lead the architect to neglect one for the other; in such cases, the site should dictate the decision. Environmental and traditional aspects must be carefully considered when implementing large and important projects in the city. The different designs for a project are best shown to the public for discussion and evaluation. Local architects are called upon to play their role and duty in harmonizing technological and traditional requirements. The advantage of experienced foreign cooperation and assistance should be sought in applying modern technology within an integrated framework that meets both traditional and environmental needs. Architectural competitions for large projects of national interest should be organized for this purpose.

9. REFERENCES
[1] Abd El Gawad, A.T., 1987, Islamic Architecture (in Arabic), Anglo Library, Egypt.
[2] Hoag, John D., 1977, Islamic Architecture, Harry N. Abrams, Inc., Publishers, New York.
[3] Center for Planning and Architectural Studies, 1992, Principles of Architectural Design and Urban Planning During Different Islamic Eras, KSA.
[4] Clevenot, D., and Degeorge, G., 2000, Ornaments and Decoration in Islamic Architecture, Thames and Hudson Ltd, London, and the Vendome Press, New York.
[5] Amin, M.M., and Ibrahim, A.L., 1990, Architectural Terms in Mamluk Documents, American University in Cairo, Egypt.
[6] Steele, James, 1994, Architecture for Islamic Societies Today, Academy Group Ltd, Spain.
[7] Hill, D., and Grabar, O., 1964, Islamic Architecture and Its Decoration, Faber and Faber, London.
[8] Rice, T.D., 1975, Islamic Art, Thames and Hudson, USA.
[9] Mohamed, M.S., 1971, Egypt Mosques (in Arabic), Al Ahram Commercial, Egypt.
[10] Mostafa, L.S., 1975, The Architectural Heritage in Egypt, Beirut Print, Beirut.
[11] The Children Hospital Project, 2002, Tasmeem Journal Magazine, Egypt.
[12] Al Mashiakhat Project, 1999, Alam Al Bena Journal Magazine, Egypt.
[13] Dar El Eftaa Project, 1997, Alam Al Bena Journal Magazine, Egypt.

Building a Reconfigurable System with an Integrated Meta-Model
Ashirul MUBIN, The Graduate School, The University of Alabama, Tuscaloosa, AL 35487, USA
and
Zuqun LUO, Information Systems, Statistics and Management Science, The University of Alabama, Tuscaloosa, AL 35487, USA

ABSTRACT
In system development activities, the usual processes of requirements analysis, design, development, implementation, and testing bring a target system to production. Soon after its implementation, however, a number of unforeseen problems, which went unnoticed at the time of the initial requirements analyses, begin to emerge as new system specifications. Although not impossible, it is very expensive to incorporate such additions into a system that is already in production. The value of the system therefore decreases, along with the satisfaction levels of its users, and possibly with higher abandonment rates. To overcome this, we propose a strategy of parallel development of a meta-model as a counterpart to the target system under development. In this paper, we present a preliminary design of an embedded meta-model that provides more control over dynamic system configuration and effectively addresses newly emerging requirements, in order to build reconfigurable systems. This additional power to manage the system effectively is crucial to extending its life cycle. As a result of being able to apply new system changes efficiently, we see a significantly higher level of user satisfaction along with increased utilization of various system features.
Keywords: meta-model, receiver, probe, knowledge base, meta-base, analyzer.

1. INTRODUCTION
Even after careful and systematic development, a system may still need further modification after its deployment in its operating environment. This is because, once a system is in service, it encounters different types of users with different levels of satisfaction [5] and different needs at specific points in the system's workflow; therefore, new unforeseen problems are likely to arise over time. Over a period of time, the system will need a complete overhaul or a full replacement. Both options are very expensive and time-consuming, as the entire system must be redesigned against the new sets of accumulated requirements. It is therefore difficult to maintain a continuously high level of end-user satisfaction throughout the system's life cycle. However, this highly desirable property can be achieved to some degree by capturing trends in system usage patterns, regularly identifying newly emerging requirements and, where possible, applying these new specifications to the system. Our goal in this work is to build a parallel system that develops concurrently with the target system, tracks changing user behavior, and generates new system specifications to reconfigure the target system to meet its current or future needs. This parallel system, called a meta-model, helps to further extend the system's life cycle by keeping it useful to end users for a long period of time [2] without any major revision.

2. SPECIFYING A META-MODEL
A meta-model can be viewed as a system envelope around its target system, as shown in Figure 1. Within the context of the systems development perspective, one meta-model can be defined as the structural specifications of another system.
These specifications can be adjusted based on feedback from the operating environment, system usage patterns, and the target system itself.

[Figure 1. The role of a meta-model: a wrapper system around the target system that receives feedback and new requests from the operating environment, gets the current system state, and resets the system state.]

The role of a meta-model is vital in gathering feedback, capturing newly emerged specifications from various sources, and utilizing system usage data. This configuration helps to formulate the state of the newly tuned system, which is made up of the specifications of objects and processes [1].

Such a directed formulation becomes more mature over time, as analyses of previous experiences are applied at each iteration during the update of system states. The meta-model is a system in itself; therefore, it will also need its own specific system requirements, analysis, design, and implementation phases. However, since its objects and processes are, in fact, the blueprints of the target system's objects and processes, its development activities will continue in parallel with the development of the target system.

Components of a Meta-Model
A carefully designed system is composed of objects, processes and their significant interconnections, which can be built through the Object-Process Methodology [1,2]. To gain programmatic control over these constituent parts of a system, we derive parallel sets of meta-objects and meta-processes built from the system. The meta-objects and meta-processes contain the detailed specifications of the current state of the system and reflect the dynamic behavior of the system at an instant of time. Table 1 describes the main components of a meta-model.

Table 1. Components of the meta-model (component: functional role)
- Meta-Object: contains the detailed specifications of its counterpart system object. It can represent a set of possible resettable parameter values.
- Meta-Process: contains the detailed specifications of its counterpart system process. It can represent a set of rules or resettable conditions applied to meta-objects.
- Meta-base: a repository in a relational database that records accumulated system configuration values over time.
- Knowledge Base: a collection of historical data on system usage, activity logs, newly emerged requirements, the system configuration at each iteration, etc. The output of the Analyzer and the changes in the Meta-base enrich the experiences in the Knowledge Base.
- Analyzer: analyzes the current system state and behavior against the configuration data recorded in the knowledge base (KB) and generates instructions for new specifications to be entered into the Meta-base.
- System Probe: a means to collect current system status and usage data.
- System Receiver: a means to incorporate newly generated system configurations into the system.

As mentioned in Table 1, we present two sets of instruments, probes and receivers, to integrate the meta-model with its target system. The purpose of a probe is to capture system usage patterns, rank features by popularity, collect indirect feedback, and infer any suggested updates at various points in the workflow while the system is in use. A receiver provides mechanisms to dynamically apply (transport) new changes to the respective system objects and processes from their equivalent meta-objects and meta-processes. Table 2 lists some of the key properties of these two components.

Table 2. Properties of a probe and a receiver
- Properties of a probe (P-values): Probe_Info, Probe_HostInfo, Probe_UserInfo, Probe_Location, Probe_FeatureInfo, Probe_ReadSysState, Probe_WriteKB, Probe_SetActivity, Probe_GetActivity, Probe_Timestamp, Probe_Log.
- Properties of a receiver (R-values): Receiver_Info, Receiver_HostInfo, Receiver_UserInfo, Receiver_Location, Receiver_FeatureInfo, Receiver_Read_MetaObjSpec, Receiver_Read_MetaProcSpec, Receiver_Write_ObjSpec, Receiver_Write_ProcSpec, Receiver_SetActivity, Receiver_GetActivity, Receiver_Timestamp, Receiver_Log.

Both the probe and the receiver contain similar data sets and operations, except that they communicate in opposite directions between the target system and the meta-model.
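The paper gives no implementation; as a rough illustration only, the P-values and operations of Table 2 might map onto a class along the following lines (a minimal Python sketch in which the method names, the dictionary-based system state, and the list-based knowledge base are our assumptions, not the authors' design):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Dict, List

@dataclass
class Probe:
    """Collects current system state and usage data (the P-values of Table 2)."""
    info: str          # Probe_Info: identifier of this probe
    host_info: str     # Probe_HostInfo
    user_info: str     # Probe_UserInfo
    location: str      # Probe_Location: where in the workflow it is attached
    feature_info: str  # Probe_FeatureInfo: which system feature it observes
    log: List[Dict[str, Any]] = field(default_factory=list)  # Probe_Log

    def read_sys_state(self, system_state: Dict[str, Any]) -> Dict[str, Any]:
        # Probe_ReadSysState: take a snapshot of the running system
        return dict(system_state)

    def write_kb(self, kb: List[Dict[str, Any]], snapshot: Dict[str, Any]) -> None:
        # Probe_WriteKB: record a timestamped snapshot in the knowledge base
        entry = {"probe": self.info, "feature": self.feature_info,
                 "time": datetime.now(timezone.utc),  # Probe_Timestamp
                 "state": snapshot}
        kb.append(entry)
        self.log.append(entry)
```

A receiver would mirror this structure, reading meta-object and meta-process specifications from the meta-base (Receiver_Read_MetaObjSpec, Receiver_Read_MetaProcSpec) and writing them into the live system (Receiver_Write_ObjSpec, Receiver_Write_ProcSpec).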
3. EXTENDED SYSTEM ARCHITECTURE
With the embedded meta-model, the target system architecture is extended to address the additional functionality needed to capture dynamic system usage data, track changing user behaviors, and apply new specifications while the system is in production. Figure 2 details the architecture of the extended system, with system probes and receivers as the two interface components.

Organization of Architectural Components
Every system is surrounded by its operating environment, where users and clients consume the services provided by the system. A system's resulting services thus have an impact on people and their environment, which triggers external influences on the system over a period of time. By capturing these influences, we get a partial picture of newly emerged system requirements. Probes continuously capture system usage activities based on the current state of the system, updating all of their P-values. The underlying knowledge base records all of these activities, along with the associated configuration data held in the meta-structure via the meta-base. The Analyzer, sitting between the two databases, analyzes the changes in the configuration data provided by the KB and formulates new configuration specifications as new sets of R-values saved in the meta-base. The receivers then apply their respective R-values to the system for reconfiguration.

[Figure 2. Extended system architecture: probes feed the system state and behavior from the operational environment into the knowledge base; the Analyzer suggests changes that create or update new specifications in the meta-base and meta-structure (meta-objects and meta-processes); receivers reconfigure the target system with the new specifications; and the reconfiguration experience of each iteration enriches the knowledge base.]

Three nested iterations are needed in the proposed 3-level iterative architecture. The outer, primary iteration is the cycle that starts from the system probes to the knowledge base, goes to the meta-base via the Analyzer, and then returns to the system receivers via the meta-structure; a sketch of this outer loop is given below. The other two iterations pass through the meta-base and the Analyzer back to the knowledge base.
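Continuing the illustrative sketch above, one pass of the outer iteration could be expressed as follows (the Analyzer, receiver, and meta-base interfaces are invented for the example, not taken from the paper):

```python
def outer_iteration(system_state, probe, analyzer, receiver, kb, meta_base):
    """One pass of the primary cycle: probe -> KB -> Analyzer -> meta-base -> receiver."""
    snapshot = probe.read_sys_state(system_state)  # capture current state and usage
    probe.write_kb(kb, snapshot)                   # record it in the knowledge base
    new_spec = analyzer.analyze(kb)                # derive new R-values from history
    meta_base.append(new_spec)                     # persist the new configuration
    receiver.apply(system_state, new_spec)         # reconfigure the target system
    return system_state
```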

Together, these three iterations reconfigure the system's meta-structure, objects, and processes, and eventually update the knowledge base by recording the history of successful iterations.

Knowledge Base Enrichment
With a meta-model properly integrated into the target system, the knowledge base (KB) captures learning experiences of system performance, behaviors, usage patterns, changing customer needs, and feedback. It is important that this knowledge base is updated during each iterative cycle [2] so that it can then direct the Analyzer to determine a more correct approach to reconfiguring system components and to configure the meta-structure to apply the necessary changes through the receivers. The experience of updating the system configuration is fed back to the knowledge base for further enrichment and better trained, or educated, predictions.

4. COMPARATIVE STUDY
Because the target system is developed in parallel with its meta-model implementation, initial development workloads are often double those of traditional system development activities. However, this initial burden eventually pays off over the extended life cycle of the system, yielding long-term benefits.

Legacy vs. reconfigurable system
A legacy system without a meta-model often cannot address the newly emerging updates that arise while the system is in production; as a result, the system's value drops at the start of each update request. During this phase, some end users may switch to similar service providers, while some loyal customers will continue to use the system with lower User Satisfaction Index (USI) values [2,5,6], even with poor performance and reliability. In either case, stakeholders will need to go through an expensive re-engineering process to overhaul the system, or simply replace it with a new system at a much higher cost and with longer downtime. Either way, there will be a negative impact on the underlying business processes. On the other hand, with the support of a meta-structure, the system becomes reconfigurable: it can be readjusted with the appropriate system metadata over a period of time and thus continue to provide services for a long time.

Projected system value and maintenance cost
Figure 3 provides simplified projected trends of system value [2] for both types of systems during their respective life cycles. Once a reconfigurable system is deployed, it may take some time for its meta-base to establish the model from real-time system usage data. The establishment time (ET) is the time needed to initialize the complete sets of system objects and processes, as well as the corresponding meta-objects and meta-processes, in the production environment. This process is necessary to correctly configure and tune the system parameters; however, the target system is not affected in its regular operations, as it continues to provide its intended services (the purpose for which it was created) to customers or end users. After this initial settling time, both the target system and its meta-structure continue to run concurrently, applying desired changes; at this point, users will notice increased value from the system, because any desired change can be addressed. The traditional system without a meta-model, on the other hand, will see its system value continue to degrade [2,5].
[Figure 3. Projected system value (e.g., user satisfaction rating) over time for a system with a meta-model vs. a system without one, marking the setup time and the minimum acceptable system value over the average life cycle.]

Figure 4 shows the projected cost of maintenance activities over the respective life cycles of the two types of systems. Initially, the cost of building a reconfigurable system can be nearly twice that of building a traditional system without any associated meta-model. However, once the system is in production (after the initial setup time), the total cost of maintenance decreases and remains much lower over the extended life cycle of the system. The traditional system, on the other hand, will experience higher maintenance costs, due to the need for alternative ways, if any exist, to adapt to the unsatisfied requirements that arise throughout its life cycle.

[Figure 4. Projected maintenance cost (as effort required to address new changes) over time for a system with a meta-model vs. a system without one, marking the setup time and the minimum maintenance cost over the average life cycle.]

Observations
Table 3 shows the aggregate values of the User Satisfaction Index (USI) [2,5] estimated from the number of updates in system reconfigurations required for three software projects that are integrated with semi-automated meta-structures: (1) ADMIN-ROLES, which manages all administrative assignments at the graduate level; (2) APPTRACK, a system to manage thousands of online graduate applications in a distributed manner; and (3) ITAP, a system to manage the training data of new international teaching assistants. As the table shows, ADMIN-ROLES and ITAP are becoming more mature, with a decreasing number of iterations through system upgrades, averaging 3.33 and 2.00 fewer iterations in subsequent years, respectively.

[Table 3. Effects on system value (e.g., USI changes) due to system updates via outer-level iterations, reporting #USI and iteration counts per year for ADMIN-ROLES, APPTRACK, and ITAP (*activities up to mid-March).]

APPTRACK, however, continues to experience great momentum in reconfiguring its specifications, averaging 4.66 more iterations in subsequent years. In all cases, these systems consistently maintain acceptable system values throughout their continuous life cycles.

[Table 4. Maintenance cost (e.g., average days to address closely related changes) due to system updates, per year, for ADMIN-ROLES, APPTRACK, and ITAP (*through the first week of May).]

In Table 4, the maintenance cost of the three selected projects is presented in terms of the average number of days required to complete a task request, whether from end users or from the system administrator, i.e., covering any type of maintenance activity. Here, the cost value has been approximated directly from the calculated average number of days developers took, together with related expenses (salaries, tools), to complete each task request. Compared to the generalized behavior shown in Figure 4, we see that the cost increases during the earlier periods (setup time) for these projects and, after a while, the cost of each project decreases as the underlying knowledge base matures.

5. CONCLUSION
With the widespread adoption of the Internet and computer systems, users are better informed about similar service providers and can quickly switch to the best service available. Therefore, in this highly competitive market, a system should be able to detect changes in its dynamic usage patterns and apply any new requirements that arise from them. Under these circumstances, an in-place meta-model can serve this purpose with minimal effort. The brief study presented in this paper indicates that a supporting meta-structure tends to provide significantly more controlled administration and manageability for building reconfigurable systems. Such systems can meet user needs as they arise over time; retention rates are therefore often high. However, not all systems need to be built this way; the surrounding operating environment, the selected groups of users, and the type and complexity of the target system itself will determine whether the construction of a meta-model will be useful in the long term [3]. This requires a clear vision of, and clear objectives for, the underlying business processes. Applying appropriate design patterns to receivers and probes, and adding predictive models [4] to the Analyzer, will solidify further enhancements to the meta-modeling concept.

6. REFERENCES
[1] D. Dori, Object-Process Methodology, Springer, 2002.
[2] A. Mubin, D. Ray, R. Rahman, Architecting an Evolvable System by Iterative Object-Process Modeling, IEEE CSIE 2009, Los Angeles, CA.
[3] M. M. Lehman, Programs, life cycles, and laws of software evolution, Proceedings of the IEEE, Vol. 68, 1980.
[4] Using Predictive Analytics to Achieve Stellar ROI, SPSS White Paper, Peppers & Rogers Group.
[5] B. Ives, M. H. Olson and J. J. Baroudi, The Measurement of User Information Satisfaction, Communications of the ACM, October 1983.
[6] J. J. Baroudi, M. H. Olson and B. Ives, An Empirical Study of the Impact of User Involvement on System Usage and Information Satisfaction, Communications of the ACM, March 1986, Vol. 29, No. 3.

Using System Sensing During the Implementation of a New Mechatronics Engineering Curriculum

Billy O'Steen*
Educational Studies and Human Development, University of Canterbury
Christchurch, New Zealand

Erik Brogt
Academic Development Group, University of Canterbury

XiaoQi Chen
Department of Mechanical Engineering, University of Canterbury

J. Geoff Chase
Department of Mechanical Engineering, University of Canterbury

*Corresponding Author

ABSTRACT
System sensing [1], or a feedback loop, has been integrated into the implementation of a new mechatronics engineering curriculum at the University of Canterbury through a sustained three-year collaboration between engineering professors and academic developers. Data was collected each year from the first cohort of students and faculty through focus groups, course evaluations, specifically designed surveys, and observations. The data was analyzed by the academic developers, and the results and recommendations were passed on to the engineering faculty so they could adjust the curriculum, teaching, and assessments to better meet the goals they had in mind when designing the new curriculum, such as students being involved in major design projects each year and a strong connection to industry [2]. Positive results from this approach included statements by mechatronics graduates that they had obtained basic skill sets in both mechanical and electrical engineering, rather than an initial lack of identity as neither mechanical nor electrical.

Keywords: Academic Development, Curriculum Development, Engineering Education, Mechatronics Education.

INTRODUCTION
The professional Mechatronics Engineering Program began at the University of Canterbury in 2003, with a limited intake of 15 students. All engineering students take core courses in physics, mathematics, mechanical engineering, engineering fundamentals, and mathematical modeling in the first (intermediate) year and specialize in the following three years (the first, second, and third professional years), leading to a BE (Hons) degree. Mechatronics, as a hybrid path between mechanical and electrical engineering, faced challenges in developing the curriculum for these three professional years. Originally, the mechatronics program combined essential, existing topics from mechanical engineering, electronics, and computer engineering, and was essentially a combination of relevant courses offered in the Mechanical Engineering and Electrical Engineering departments. However, there was a lack of coherence and of a systemic approach to the synergistic integration of the three components (mechanical engineering, electronics, and computer control) that is supposed to be the cornerstone of mechatronics. As a result, several challenges soon arose. Students lacked formal prerequisites for some classes and consequently had limited options for electives as their studies progressed. The lack of labs and design projects led to a focus on textbook teaching, leaving students with insufficient exposure to practice-oriented, problem-based training. And the students were confused about their academic identity: they felt that they were neither mechanical nor electrical engineers.

Partly as a result of these challenges, in the first graduating year, 2006, only six of the original 15 students who had enrolled in the first (second) professional year of mechatronics completed their degrees. These challenges required a curricular review of the program if it was to continue offering the degree pathway. This process began at the end of 2006. The new year 2 curriculum was implemented in 2007, the year 3 curriculum in 2008, and finally the year 4 curriculum in 2009. The curriculum development process deliberately sought the collaboration of colleagues outside the Faculty of Engineering. In particular, these included academic developers from the university's Centre for Teaching and Learning. The role of the academic developers was to do "system identification", obtaining input from students, academics, and industry, and "system sensing", acting as a feedback loop in which information from the (curricular) system output is monitored, evaluated, and fed back to better achieve a goal. This curriculum development model allowed learning outcomes to be monitored against a set of parameters in a timely manner, continually refining course components and assessments and optimizing the delivery of the degree program. Special attention was paid to feedback on, and adjustment of, the three new courses developed for the professional years. These courses are ENMT201: Introduction to Mechatronics in the second year, ENMT301: Mechatronic Systems Design in the third year, and ENMT401: Mechatronics Research Project in the fourth (final) year. They are taken by students entering the Mechatronics Engineering BE (Hons) degree program after completing the common engineering curriculum in their intermediate (first) year.

CONSIDERATIONS FROM THE LITERATURE
The following considerations from the fields of academic development and engineering education guided the collaborative efforts in the mechatronics program.

Using Student Feedback
While collecting student feedback has been used for several decades as a means of measuring perceptions of teaching quality, its usefulness in improving teaching and curriculum development depends on the extent to which staff respond to and apply the information obtained in this way [3]. Therefore, to create a more responsive curriculum delivery system, it is suggested to determine how to incorporate student data into ongoing program design.

In Situ Academic Development
Prebble et al. found in their research synthesis on academic staff development that the academic work group is generally an effective setting for developing the complex knowledge, attitudes, and skills involved in teaching [4]. Therefore, the combination of engineering content experts and academic developers, each with a different skill set, could be fruitful in the development of a quality mechatronics engineering curriculum.

Redesigning Engineering Education
According to an article by Basken in The Chronicle of Higher Education [5], a new report from the Carnegie Foundation for the Advancement of Teaching, Educating Engineers: Designing for the Future of the Field, reiterates the warnings from the National Science Foundation and the National Academy of Engineering that American engineering education is too theoretical and not practical enough. While Basken says engineering schools have known for quite some time that both students and employers want a more relevant curriculum, both faculty members and accreditation practices are often more committed to the traditional approach.
Therefore, the intended emphasis on practical and design work in the mechatronics curriculum was in accordance with international guidelines for engineering education. These considerations regarding the use of student feedback, in situ academic development, and the redesign of engineering education indicate that a responsive and effective approach to curriculum design would include: collecting feedback from students and faculty in ways that go beyond standard teaching and course evaluations; using that feedback in situ, in a collaboration between discipline-based faculty and academic developers; and placing that feedback within the context of calls to redesign engineering education in a more practical direction.

METHODS OF DATA COLLECTION AND ANALYSIS
Using an inquiry-based learning approach [6], in which the engineering faculty's questions guided the collaboration, the academic developers collected data in 2007, 2008, and 2009 from the first cohort of students and teachers as they experienced the new curricula. Focus groups, course evaluations, specifically designed surveys, and observations served as the main collection instruments. The data was analyzed by the academic developers, and the results and recommendations were passed on to the engineering faculty so they could adjust the curriculum, teaching, and assessments to better meet the goals they had in mind when designing the new curriculum, such as students being involved in major design projects each year and a strong connection to industry [2]. In addition, final reports were generated and shared with the Study Board that oversees the Mechatronics Program and is made up of academics from the Electrical and Mechanical Engineering departments.

FINDINGS
Data collected from the same cohort of students at the end of each new course over three years provided specific information on both the individual courses and the entire program. A summary of the findings by course is followed by conclusions and implications for the overall mechatronics curriculum.

ENMT201: Introduction to Mechatronics, 2007
This second-year course is the first comprehensive mechatronics design course students take in the program. It is both an introduction to the discipline of mechatronics and a combination of mechanical and electrical engineering knowledge. Its content includes an introduction to mechatronics, sensors and actuators, basic instrumentation concepts, circuit analysis, computer-aided design, and an introduction to control. Along with the coursework, this design course includes a series of labs in the first semester. Each lab project is a self-contained exercise that addresses a specific application; students working in pairs have to implement the control interface and design and write the control logic. These lab projects are: Introduction to ladder logic; Control of inputs, outputs, and sensors; Automation of a car wash process; Water tank level control; Stepper motor control; DC motor speed control; and AC motor control. In the second semester of the course, students are tasked with developing a control system that uses a programmable logic controller (PLC) to control a five-story elevator driven by DC motors. Figure 1 shows the PLC platform, built in-house, and the 10:1 scaled-down elevator modeled after the actual five-story elevator in the mechanical/civil engineering building at Canterbury.

[Figure 1. Elevator control project using a PLC (left), tested on the elevator model (right).]
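The paper itself contains no code; purely for illustration, the kind of feedback law students meet in labs such as the water tank level and DC motor speed control exercises can be sketched as a discrete PID loop (a Python sketch with invented gains and an invented first-order plant; the actual labs use ladder logic on a PLC):

```python
def pid_step(setpoint, measured, state, kp, ki, kd, dt):
    """One update of a discrete PID controller; `state` carries the
    integral term and previous error between calls."""
    error = setpoint - measured
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

# Toy usage: drive a crude first-order motor model toward 100 rpm.
state = {"integral": 0.0, "prev_error": 0.0}
speed, dt = 0.0, 0.01
for _ in range(2000):
    u = pid_step(100.0, speed, state, kp=0.8, ki=2.0, kd=0.01, dt=dt)
    speed += (u - speed) * dt  # plant: speed relaxes toward the drive signal
print(round(speed, 1))  # approaches the 100 rpm setpoint
```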

Data collected from students in the ENMT201 course in 2007 suggested that students: enjoyed the class; found the content adequately challenging; and developed a sense of community and camaraderie in the program through their experiences. In the qualitative data, the areas that students thought could be improved were mainly logistical: more equipment for the individual laboratories, coordination of assessments with other courses, a consistent location for lectures, and more explicit explanation of the sequencing of topics. These findings from the ENMT201 course were fed back to the lecturers, the program coordinator, and the Study Board that oversaw the development of the curriculum. The curriculum for the following year was developed and implemented with these findings in mind. One of the improvements was to streamline the lab projects with the goal of maximizing learning outcomes within the desired contact hours. Additionally, ongoing assessments were spread more evenly throughout the year, and bottlenecks were avoided. The elevator design project exposes students to controller design using the PID control theory covered in the coursework. There was a concern about whether such a design skill was too difficult for students at this early stage; course evaluations demonstrate that students are capable of mastering that skill set. The number of elevator models was doubled from 2 to 4, allowing each team more time on the machine for debugging and testing.

ENMT301: Mechatronic Systems Design, 2008
This course provides students with an intensive opportunity to apply the knowledge from their lectures to the creation of a robotic search and rescue vehicle for the Canterbury RoboCup competition. The project is an integral part of the year-long design course. Students, in teams of three, work in a dedicated mechatronics design lab supervised by two instructors and a senior mechatronics technician. The design project requires students to design and build a mobile robot capable of quickly locating and gathering three objects within the playing field. Human intervention is not allowed once the robot starts working. Figure 2 shows the truck base equipped with a Qwerk controller, which forms the standard development platform. The robotic system must have the following capabilities: the system hardware must be connected to the provided truck base, and the interface must be managed through the Qwerk microcontroller, operated remotely from a networked computer; the targets must be picked up unharmed and stored safely in the vehicle; and the robot must be able to pick up cups from any possible location, including corners and walls. Students are expected to achieve the following learning objectives: the ability to identify problem requirements; to generate and evaluate design concepts; to design and manufacture a manipulator to handle targets; to design and manufacture appropriate sensing mechanisms; to design robotic control software to perform prescribed tasks; to integrate, test and debug the system; and to communicate, document, demonstrate and present the design and results.

[Figure 2. Basic mobile robot platform equipped with an integrated controller.]
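Purely as an illustration of the kind of control software the learning objectives describe, the search-and-collect task can be pictured as a small state machine (the states, sensor flags, and three-target goal below are a hypothetical reading of the task, not the Qwerk API or any team's design):

```python
# States for a minimal search-and-collect loop (illustrative only).
SEARCH, APPROACH, GRAB, DONE = range(4)

def control_step(state, targets_collected, target_visible, target_in_gripper):
    """One iteration of the robot's decision loop."""
    if targets_collected == 3:       # competition goal: three objects gathered
        return DONE
    if state == SEARCH and target_visible:
        return APPROACH              # a target was spotted: move toward it
    if state == APPROACH and target_in_gripper:
        return GRAB                  # close the manipulator and stow the target
    if state == GRAB:
        return SEARCH                # resume scanning the playing field
    return state                     # otherwise, stay in the current state
```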

Data collected from students in the ENMT301 course in 2008 indicated that they: seemed to immerse themselves in, and enjoy, designing and building a search and rescue robot (five different students used the word "fun" in individual surveys, 100% of respondents believed they had achieved something significant in the course, and 100% would recommend the course to others); did not find the project too daunting, although there was a discrepancy as to the level of guidance students thought they needed, either the same amount as this year or more; saw their identity in the program as inherent in the nature of the course (a designated lab space, the team approach, a cool project); and saw the lasting lessons of the course as what they learned about the design process, project management, and teamwork.

These findings from the ENMT301 course were passed on to the faculty, the program coordinator, and the Study Board that oversaw the development of the curriculum. The curriculum for the following year was developed and implemented with these findings in mind. One of the adjustments to the course was to introduce machine vision into the classroom; students can now design and implement a vision system to find targets.

ENMT401: Mechatronics Research Project, 2009
This final-year research project is a year-long mechatronics design exercise. Students can work in teams or individually. Most projects are industry sponsored, and students are responsible for all aspects, including organization, management (both time and budget), the project proposal, design and prototyping, and the final report. Each project has an academic supervisor and an industry mentor, and tackles a real industry problem that does not have a standard solution. As such, it requires substantial research and innovative design. Figure 3 illustrates one deliverable, a wall-climbing robot for welding a stainless steel tank.

[Figure 3. Wall-climbing robot for the automatic welding of a stainless steel tank.]

Data collected from students in the ENMT401 course in 2009 indicated that they: thought they learned considerable skills in the project, with an emphasis on non-technical, managerial skills; and saw areas for improvement that could include increased lecture time, clarity of project briefs, clarity on evaluation, more specifically mechatronic projects, and, to a lesser extent, support and logistics.

These findings from the ENMT401 course were passed on to the faculty, the program coordinator, and the Study Board that oversaw the development of the curriculum. The curriculum for the following year was developed and implemented with these findings in mind. The improvements implemented included scheduling the separate management course in synchronization with the research projects. In addition, the evaluation schedule was more clearly structured. There are now more industry-sponsored research projects in mechatronics than the class capacity can accommodate.

Program Review, 2009
The graduating class of 2009 (the first to go through the redesigned curriculum) was asked to reflect on the program as a whole and identify strengths and weaknesses. Students indicated that they were very pleased with the overall mechatronics program and that the department had been successful in creating an academic home for students. They considered the program very time consuming and demanding, and noted that various topics are covered multiple times in different mechatronics papers. They also expressed a desire for more, and more structured, exposure to industry throughout the program. Collectively, this review, along with the course data, will be considered by the faculty, program coordinators, and the Board of Studies as the mechatronics curriculum continues to be developed.

CONCLUSIONS AND IMPLICATIONS
The combination of engineering and educational expertise in the development of the new mechatronics curriculum has proven to be a successful endeavor. The system sensing and feedback provided by the academic developers brought an objective perspective and new impetus. The non-engineering scholars complemented the engineering scholars by providing valuable insights in terms of setting and achieving learning objectives, managing student expectations, and advising on feedback gathering.

Arguably, students, staff, and departments were more open to collaboration, feedback, and data collection because the academic developers were outside the traditional line management structure and were therefore viewed as neutral. This experience at the University of Canterbury has led to the implementation of several effective approaches to mechatronics education, including the integration of design labs and projects within and across courses, and cooperative learning. In addition to the curricular adjustments, other positive results involved graduating mechatronics students stating that they felt both mechanical and electrical in their core skill sets, rather than their initial lack of identity as neither mechanical nor electrical. After a concerted 3-4 year effort, the University of Canterbury's Mechatronics Engineering Program has grown into a top-tier engineering program that attracts the best students from across the country and abroad. It has grown to an intake of 30 students per year, with room for expansion, and its graduates are sought after by industry. More work is needed to monitor graduate profiles and industry acceptance, which will serve as further feedback in our work towards excellence in mechatronics engineering education. This merging of mechatronics engineering content and expertise with the field of academic development has provided all involved with a unique opportunity to experience a best-practice model of interdisciplinary collaboration, with subsequent mechatronics program students as the ultimate beneficiaries. It is anticipated that other beneficiaries of this transferable process may be other departments that develop their curricula in collaboration with academic developers.

REFERENCES
[1] D. Shetty, J. Kondo, C. Campana and R. Kolk, Real Time Mechatronic Design Process for Research and Education, Proceedings of the 2002 American Society for Engineering Education Annual Conference and Exposition.
[2] X. Chen, P. Gaynor, R. King, G. Chase, P. Bones, P. Gough and R. Duke, A project-based mechatronics program to reinforce mechatronic thinking: a restructuring experience from the University of Canterbury, Proceedings of the 17th World Congress of the International Federation for Automatic Control, Seoul, Korea, July 6-11, 2008.
[3] R. Ballantyne, J. Borthwick and J. Packer, Beyond Student Evaluation of Teaching: Identifying and Addressing Academic Staff Development Needs, Assessment & Evaluation in Higher Education, Vol. 25, No. 3, 2000.
[4] T. Prebble, H. Hargraves, L. Leach, K. Naidoo, G. Suddaby and N. Zepke, Impact of Student Support Services and Academic Development Programmes on Student Outcomes in Undergraduate Tertiary Study: A Synthesis of the Research, Wellington, New Zealand: Ministry of Education.
[5] P. Basken, Why engineering schools are slow to change, The Chronicle of Higher Education, retrieved January 23, 2009.
[6] V. Lee (Ed.), Teaching & Learning Through Inquiry: A Guidebook for Institutions and Instructors, Sterling, VA: Stylus Publishing.

Internet of the Future: Non-Engineering Challenges

Gilson Schwartz
Faculty of Communication and Arts, University of São Paulo

Edison Spina
Department of Digital Systems and Computing, Escola Politécnica of the University of São Paulo

José Roberto de Almeida Amazonas
Department of Telecommunications and Control Engineering, Escola Politécnica of the University of São Paulo

Abstract
The Future Internet is a fascinating topic from both an engineering and an educational point of view. Terms like "pervasive" and "ubiquitous" have become familiar to an increasing number of users due to the presence of the Internet in our daily lives. Quality of Service (QoS) and Quality of Experience (QoE) have become buzzwords in the network engineering community. However, we dare say that the engineering challenges facing the Future Internet are easy; they are easy for the sole reason that we know what is at stake. Thus, in this article we address the challenges of the Future Internet from the perspective of Systems Engineering, analyzing it as a socio-technical complex, and from the perspective of Iconomics, considering the vertices of beings, things and icons.

I. INTRODUCTION
The Internet of the future is a fascinating subject from both an engineering and an educational point of view. Terms like "pervasive" and "ubiquitous" have become familiar to an increasing number of users due to the presence of the Internet in our daily lives. Quality of Service (QoS) and Quality of Experience (QoE) have become buzzwords in the network engineering community. However, we dare say that the engineering challenges facing the Future Internet are easy; they are easy for the sole reason that we know what is at stake. The engineering challenges of the Future Internet can be summed up as obtaining the best transmission quality among any set of end users. This problem could be trivially solved by laying a perfect cable between any pair of communicating users, where "perfect" must be understood as a transmission medium in which: i) the delay would be zero; ii) the attenuation of the signal would be zero; iii) the influence of noise would be zero; and iv) the distortion of the signal would be zero. We know that there is no solution to such a problem: first, because the speed of light, thanks to Einstein, is limited to c = 300,000 km/s; second, because no such perfect wire exists; and, last but not least, because it is neither economically nor environmentally feasible to lay a cable between every pair of communicating users. On the other hand, by assuming such a hypothetical solution, we know the best quality that could ever be achieved, and this is the direction in which engineering efforts must go, trying to approach this ideal level of performance. In short, engineering knows where to go. However, does anyone know the answers to the questions the Internet raises: what is its economic value? What should be the rules governing access, exploitation, intellectual property rights, and content distribution? How do people represent and communicate the values and expectations associated with Internet-related actions, projects, and technologies? Conceptually, the challenges of network design and implementation (of the social fabric, if you will, as in actor-network assemblies and reassemblies) are compounded by the simultaneous interplay of space, time, and symbol: the playful evolution of this electronic and human infrastructure corresponds with values, projects and icons for the e-superstructural audiovisual networks.
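To make the speed-of-light bound concrete, here is a back-of-the-envelope check (a Python sketch; the 10,000 km example distance is ours, chosen only for illustration):

```python
C_KM_PER_S = 300_000  # the paper's round figure for the speed of light

def min_one_way_delay_ms(distance_km: float) -> float:
    """Lower bound on one-way delay: even a 'perfect cable' cannot beat c."""
    return distance_km / C_KM_PER_S * 1000.0

# Roughly a quarter of the Earth's circumference:
print(min_one_way_delay_ms(10_000))  # ~33.3 ms, before any queuing or processing
```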
The contribution of this work consists in analyzing the challenges of the Future Internet from the perspective of Systems Engineering, seeing it as a socio-technical complex, and from the perspective of Iconomics. Iconomics is based on the triad of icons, things and beings, and actors will benefit from this new era if they are capable of appropriating these three dimensions. Iconic appropriation depends on a symbolization process that, in the current context, has to be carried out in the sphere of groups and networks. Following this brief Introduction, in Section II we present a review of recently published findings on the non-engineering challenges of the Future Internet. In Section III we discuss the Future Internet from the Systems Engineering perspective, and in Section IV from the Iconomics perspective. Section V summarizes our conclusions and indicates some future work.

II. A REVIEW OF THE RECOGNIZED NON-ENGINEERING CHALLENGES OF THE INTERNET OF THE FUTURE
This section presents a brief review of the recognized non-technical challenges of the Future Internet, based primarily on three recent publications: [1], [2] and [3].

A. The future socioeconomics of the Internet: challenges and prospects
According to [1], socioeconomics aims to understand the interaction between society, the economy, markets, institutions, self-interest, and moral commitments. It is a multidisciplinary field that uses methods from economics, psychology, sociology, history, and even anthropology. The socioeconomics of networks has been studied for more than 30 years, but primarily in the context of social networks rather than the underlying communication networks. Over the past few decades, the Internet has grown and evolved to an unprecedented size. However, its architecture is still based on the original design principles for an academic network in a friendly environment. In addition to academic use, the Internet is now used as a business platform and has become a central part of social life.

The overall socioeconomic context is important, as it can significantly drive or hinder the success of an innovation; relevant issues include the degree of mobility in lifestyles, the balance between privacy and sharing, the need for security, the importance attached to health, and the distribution of wealth. Important socioeconomic aspects include the Internet Service Provider (ISP) and telecommunications provider markets, ISP peering agreements and/or transit contracts, as well as customer usage behaviors and content selections. A study of all these aspects has to include investigations of the regulations of the electronic services market and of security regulations, as well as the physical environment of electronic services in terms of availability (global vs. highly focused, e.g. cities) and reliability for commercial services. This approach will make it possible to determine (where possible) economic growth, the maximization of provider revenues, and the benefits to customers. Socioeconomic challenges can be identified in all domains of the Future Internet, including the areas of networks, services, and content. Regarding the economic challenge faced by the three areas, it is worth mentioning that the rules applied to sharing are extremely vital for the proper functioning of the Internet ecosystem and directly affect the value of the network for its users. Such challenges can only be addressed by merging the disciplines of computing and economics. The key questions are: what is wrong with current Internet sharing technologies? Are they consistent with the economics? More specifically, since TCP is the dominant sharing technology, is TCP economically sensible? Is Deep Packet Inspection (DPI) technology good or bad for the Internet community? What network sharing technologies justify the end-to-end (E2E) paradigm from an economic perspective? What is required for peer-to-peer (P2P) to be a blessing instead of a curse? Are there bad applications, or just inefficient combinations of sharing technologies and pricing schemes? [1]

In addition to the economic dimension, the Internet faces a significant social challenge. Today's Internet penetration has reached 20% worldwide and is expected to reach 30% in 2015 and, later, 50%. The Internet of the future will have to support daily life in developed and developing countries alike. Telecommunications infrastructures must be designed to guarantee access to the Future Internet also where access is currently deficient. As mobile, wireless, optical and broadband communications infrastructures become ever larger and more interdependent, the number of web services is expected to grow exponentially in the coming years. These trends lead to a Future Internet of billions of services in a network of equals, comprising large companies, small and medium-sized enterprises (SMEs) and citizens, in which services will be produced and consumed alike by prosumers. In this new context, trust will become a major issue, and Web 2.0 technologies are already beginning to support trust and reputation within and between computers and humans. A critical issue in Future Internet research is the current proliferation of separate efforts due to the various initiatives around the world. On the one hand, this can be good for innovation, since it can produce more ideas. However, if the initiatives remain separate during the development of the Future Internet, many technologically incompatible Internets could emerge.
Unlike today's global Internet, these separate networks could cause market fragmentation and even social isolation. To avoid such adverse possibilities, the design and implementation of the global Future Internet must proceed with an increasing degree of cooperation between initiatives. The mere separation of the Future Internet initiatives, if left unchecked, could become a schism leading to many incompatible Future Internets.

B. Challenges of the evolution of the Internet: attitude and technology
Another dimension can be added to the challenges of the evolution of the Internet: the attitude towards new technologies. In [2], the author states that, according to economic theory, in a competitive environment with limited resources, the rational behavior of species changes so that individuals maximize a utility function that depends on these resources and on the satisfaction of their needs. The behaviors of rational individuals are limited by their attitudes. Attitude is understood as the disposition, tendency or orientation of the mind with respect to something, particularly with respect to technology. These attitudes determine the range of behaviors that are valid according to the mental disposition. To enable a rational individual to evaluate a broader range of behaviors, a change in attitude toward the elements of reality involved in satisfying his needs is required. There is a limit at which no new technology can enable better behaviors, because such behaviors cannot be conceived within existing attitudes. At this limit, the only way to achieve better performance in a competitive environment is a change in attitudes. When the attitudes involved in the satisfaction of needs evolve, new behaviors arise and, consequently, new technologies can be developed that enable these new behaviors. The rationale for this simple model of coevolution is that attitudes require new technologies to improve the satisfaction of needs and, conversely, new technologies make possible the improvements that new attitudes allow. The Internet is a tool comprising technologies that users employ to satisfy their needs according to their attitudes. This tool has two further characteristics that make it necessary to complete the model with two more requirements. These characteristics are: (1) the Internet is located in multiple places, both geographically and within society; and (2) the Internet does not have a single owner. Given these two characteristics, there are two requirements [4] for the Internet to adopt changing attitudes and technologies: (1) universality: any user anywhere can use new technologies and adopt new attitudes; and (2) independence: a new technology can be used, and some users can adopt new attitudes, even if others do not use or adopt them; that is, there is no need to orchestrate change. The model of Internet evolution is completed with two fundamental characteristics: creativity and economic viability. The former must be present whenever a change occurs intentionally: unless a change happens by chance, changes are conceived by creative minds on the basis of their knowledge and experience. The latter is necessary for evolution to be possible at all: in an economy with competition for scarce resources, the underlying agents of evolution must have the economic capacity to invest in new technologies and to adopt new attitudes while obtaining a benefit from them.
Therefore, creativity, economic viability, universality and independence are necessary requirements for the aforementioned coevolution to take place. They ensure that change will happen, that it will be funded, that it will not be confined to a specific community, and that it has the potential to spread virally across the Internet. In terms of objectives, there are two interests in using the Internet: the interests of individuals and the interests of organizations. Both individuals and organizations use the Internet to fulfill their needs. But in recent times, and particularly during the evolution of the web towards Web 2.0 [5], individuals have proven to obtain much more benefit than organizations. Blogs, wikis, social networks, file sharing, podcasts, online applications, and other Internet advances help people to innovate to meet their needs more efficiently.

Consequently, the success of these new technologies and behaviors grows with their increasing number of users [6]. Furthermore, it is widely recognized that these new technologies have evolved along with attitudes [5]. Among others, the main components of this change in individuals' attitudes are participation, collaboration, trust and sharing. This success does not seem to spread equally across companies. Although companies have been introduced to Web 2.0, not all manage to realize its benefits; sometimes they even abandon these fledgling tools, because the tools do not always turn out to add value to their business. It is recognized that certain organizations are undeniably profitable thanks to new Internet technologies, but this benefit tends to be centered in the tertiary, technology sector, rather than pervasive across all sectors of the economy, including the primary and secondary sectors. The problem is that the social evolution of new economic sectors fundamentally depends on higher rates of productivity in the primary and secondary sectors. As the performance and competitiveness of these organizations are not greatly aided by current Internet trends, the financial capacity required to ensure the economic viability of investments in new Internet technologies and attitudes is limited. It seems reasonable that one solution is to promote innovative paradigm shifts in organizations' attitudes towards the Internet, i.e., to incorporate collaboration, trust, participation and sharing. However, this paradigm shift does not consist of applying, as is, the patterns existing in communities of individuals to companies. Competition in business is much fiercer: collaboration, trust, participation and sharing must be reconciled with competitiveness, and new behaviors must assess how to create value from them. If such a paradigm shift permeates organizations, the performance of each sector of the economy can be rewarded by new Internet technologies and can drive the evolution of Internet technology. New companies can emerge from the new attitudes, and new value can be created for the economy. Unexplored, unexploited and unfused information may be the key to enabling companies to capitalize on new information technologies and gain a competitive advantage.

C. Roadmap for Real-World Internet applications: socioeconomic scenarios and design recommendations
In [3], the authors' vision is to realize ambient intelligence in a future network and service environment, and to integrate wireless sensor and actuator networks (WSANs) efficiently into the Future Internet. Three scenarios are analyzed to roadmap some Real-World Internet (RWI) applications. The three phases involve different levels of social change, business innovation, and technical feasibility. They are not discrete, but lie on a continuous timeline that depends on the context of the actual end use. 1) The first phase, Now, is evolutionary from the social point of view and incremental from the technological point of view, since it is the least integrated: the infrastructure of a shopping mall is used for applications dedicated to the stakeholders of that place.
2) The second phase, New, is more futuristic from the socioeconomic point of view and innovative from the technological point of view, since it implies the deployment of connections between different, separate areas of the city and begins to integrate entities beyond the shopping mall, e.g. private residential WSAN infrastructures. 3) The third phase, Next, is the most revolutionary from the point of view of society, because it involves holistic RWI applications. It proposes a completely horizontal vision of RWI applications, with the integration of all types of WSAN infrastructures in the city for the provision of an unlimited scope of applications. This is a disruptive vision compared to existing Internet technology. An RWI system has to take up the challenge of providing benefits to the user and to society through Future Internet (FI) applications in key domains such as the environment, mobility, security, professional and industrial activities, citizenship and ethics. Today's Internet will change from a distinct network, providing specific services accessible through dedicated terminals, to an Internet dissolved in the artifacts of the physical world, accessible through heterogeneous networks that will allow users to navigate the world while browsing the Internet. The RWI framework must support the horizontal use and reuse of common WSAN infrastructures to develop a variety of applications; it should therefore not require as many WSANs as there are applications. The RWI system architecture must be scalable, allowing its functions to evolve to meet future requirements of technological growth and change. The RWI system must ensure the continuity of the services that the user needs, with adequate quality, despite the user's mobility. The RWI framework should reduce complexity, giving user applications easy access to sensing and actuation services that are available everywhere. The RWI framework should provide mobile users with a good level of security and privacy protection. RWI applications are expected to improve users' safety in various activities, particularly in the transportation, built environment, crisis management and healthcare domains. RWI applications are also expected to increase a sense of community by making the side effects of people's behavior noticeable. RWI will support professional and industrial activities; these benefits will be noticeable in the short term, as shown in the Now scenario. The RWI system should support new business opportunities and new industry partnerships by optimizing the integration of sensed and controlled physical phenomena into the Internet. With the integration of a real-world dimension into the Internet, privacy and related ethical issues will grow. Even if RWI technology integrates the appropriate mechanisms, privacy and ethics may persist as critical issues, and mistrust may delay adoption, despite the fact that RWI allows for an open and secure market space for context awareness and interaction in the real world.

D. Concluding remarks
The three documents reviewed provide a detailed look at the non-engineering challenges facing the Internet of the future. However, this discussion is far from over. A more integrated assessment of such challenges seems to be required, one that emphasizes the human aspects. In addition, new dimensions remain to be included in the analysis.
The first theme is addressed in Section III, which uses the concepts of Systems Engineering, while the second is addressed in Section IV, which introduces the Iconomic vertices of beings, things, and icons.

III. THE INTERNET OF THE FUTURE: A SYSTEMS ENGINEERING PERSPECTIVE
One of the main difficulties in understanding the challenges of the Internet of the Future is its complexity. The Systems Engineering perspective provides a means to harmonize the different dimensions that make up the Internet. In this section we show how the Systems Engineering perspective helps to form an overall vision of such a complex problem, giving the human being a leading role.

A. Solving engineering problems
Descartes' precept that every problem should be divided into as many separate simple parts as possible (reductive analysis) is the most successful technique ever used in science. Engineering, as a science of constructive problem solving, uses this principle to reduce problems into the smallest possible parts, to arrive at the disciplines assigned to each smallest problem and, based on the fundamental phenomena and materials, to work out the necessary or possible solution. This is the main method of designing solutions for building systems.

B. Systems Engineering
The word "system" has a subjective character. It is used to refer to forms of organization that are associated with the way in which people recognize them; the constructivist vision of reality holds that a system does not exist in the real world independently of the human mind [7]. Systems engineering, unlike the traditional engineering disciplines, does not follow a set of fundamental phenomena based on physical properties and relationships. Instead, it is concerned with the knowledge needed to handle these phenomena, dealing with the emergent properties of the system and looking for a way to control the entropy of the system [7], [8]. Reductive analysis and the relatively simple construction of the parts became more unwieldy as systems got bigger and bigger.

C. Complexity
The big new problem in large-systems engineering has become complexity. Complexity arises when there is a set of characteristics of the system that are not present in any of its parts by itself: they are characteristics of the whole, or of the coexistence of the parts working together. They are called emergent properties; perhaps the simplest example is the human body, where life is an emergent property that does not exist anywhere apart from the body. The Internet is complex; it is based on a large number of subsystems that work together, generally referred to collectively as the e-infrastructure. Many layers, not just technical ones, interact to fulfill the tasks assigned to them.

D. Sociotechnical systems
Information systems are built considering the actors (people and social institutions) and technology. Such a system is a socio-technical system: a system in which there is a social infrastructure (people and social institutions) and a technological infrastructure. The consideration of these two infrastructures is crucial to identifying the correct factors for service quality and to identifying the stakeholders' expectations, in order to give them the experience they expect, surprising them whenever possible [8], [9], [10]. Considered only as an electronic infrastructure, a computer network, for example, is just a technological artifact. It has a purpose, a meaning, only when one or more people use it to perform some task, such as searching for information or processing data to solve problems. The technological, human and social components of an e-infrastructure system cannot be seen solely as the sum of its components: there is a complex interaction between them, with emergent properties. Another factor that contributes to the complexity of the e-infrastructure system is that many of the systems used today were not developed in an integrated way. They were assembled gradually, resulting in a kind of mosaic of new and old technologies, people, and social institutions.
New designs must respect this scenario, considering new and old technologies and the various actors (such as users, consumers and social institutions). These actors want to optimize their own decisions, thinking about their own subsystems, proposals and interests [11]. Large-systems engineering had very good answers before and during World War II, but when the war ended a new set of problems arose. These issues stemmed from the dawn of a completely new player in the game: the consumer. What does the consumer want? In other words, what is he willing to pay for? What are the requirements?

E. Requirements engineering
Requirements engineering is an engineering discipline that is crucial to the development of any product or service. It has a life cycle that guides the systems engineer through the processes of eliciting, negotiating, documenting and validating the requirements of the systems to be developed. The systems engineer uses this process to execute a task that Kossiakoff and Sweet [12] call the concept definition phase and that INCOSE [13] calls the concept stage; both refer to the initial phase of the various life-cycle models established by the engineering literature for the development of information systems. Within the requirements process, the elicitation phase deals with people. Requirements gathering should draw on the knowledge and experience of the directors, managers, employees and other members of the organization that demands the system. The systems engineer needs to talk to the people who demand the new system and to the people who will be affected, positively or not, by it. In general, all these actors are organized in groups, formal or not, with different purposes, such that the whole has no clear purpose and the groups pull in different and often contradictory directions. The elicitation phase is essentially a system of human activity that can bring some degree of order to this situation of multiple demands, purposes, issues and problems. Using appropriate methods to progressively increase the order in the requirements-gathering process, and to reach a point at which specific designs and solutions can emerge, systems engineering takes an approach aimed at capturing all three types of requirements that Kano [14], [15], [16] establishes must be present in a product or service. These types of requirements let engineers understand how meeting or exceeding stakeholder expectations affects satisfaction with the system:
- Normal requirements are those that are explicitly requested.
- Expected requirements are so basic that stakeholders may omit to mention them, considering it unnecessary to request them explicitly. A system without these requirements is highly unsatisfactory, yet meeting them usually goes unnoticed by most stakeholders.
- Exciting requirements are those whose absence is not perceived and will not leave stakeholders unsatisfied. Since these requirements are not formalized by the stakeholders, who are in no position to express them, it is the engineer's responsibility to explore the problem and its opportunities to discover such unspoken elements.
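As a rough illustration of how these categories can be operationalized during elicitation, the sketch below classifies requirements from answers to Kano's paired functional/dysfunctional survey questions. The category names follow the text above; the simplified answer scale, the mapping rules and the helper names are our own illustrative assumptions, a deliberately reduced version of the full Kano evaluation table.

```python
from enum import Enum

class KanoCategory(Enum):
    NORMAL = "normal (explicitly requested)"
    EXPECTED = "expected (assumed; unsatisfying if absent)"
    EXCITING = "exciting (delights; unnoticed if absent)"
    INDIFFERENT = "indifferent"

def classify(functional: int, dysfunctional: int) -> KanoCategory:
    """Classify one requirement from the usual 1..5 Kano answers:
    1 = "I like it" ... 3 = "I am neutral" ... 5 = "I dislike it".
    `functional` answers "How do you feel if the feature is present?";
    `dysfunctional` answers "How do you feel if it is absent?"."""
    if functional == 1 and dysfunctional == 5:
        return KanoCategory.NORMAL       # wanted present, rejected absent
    if functional in (2, 3, 4) and dysfunctional == 5:
        return KanoCategory.EXPECTED     # taken for granted, missed badly
    if functional == 1 and dysfunctional in (2, 3, 4):
        return KanoCategory.EXCITING     # delight; absence not penalized
    return KanoCategory.INDIFFERENT

# Hypothetical elicitation results for three candidate requirements.
for name, f, d in [("search", 1, 5), ("uptime", 3, 5), ("voice UI", 1, 3)]:
    print(f"{name}: {classify(f, d).value}")
```

In practice the engineer would aggregate many stakeholders' answers per requirement; the point here is only that expected and exciting requirements are, by definition, invisible in a naive "list what you want" survey.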
For example, as the engineer gains insight into stakeholder needs, he can use his experience to propose features that were not requested but that can improve the efficiency and effectiveness of the system.

F. Final considerations
Human beings have personality, hopes, fears, dreams, values and intentions. Failing to consider these human dimensions when building systems ultimately

dehumanizes human-system interaction, and it is expensive!

IV. THE INTERNET OF THE FUTURE: AN ICONOMIC PERSPECTIVE
Iconomics, viewed from a very broad perspective, results from a critical review of the political economy and macroeconomics of technology transfers and of the design of markets aligned with the center-periphery system. The evolving network of actors develops and unfolds in the 21st century, generating new tools for the creation, management and critique of the information economy as a relatively open and simultaneously global and local network. To the existing technological and economic gaps are added social and cultural differences that are immaterial or intangible, and that are related more to the field of icons than to the requirements of things (hardware, software) and beings (evolving social networks). The iconicity of this evolutionary development is also an index of new metrics for the consumption and creation chains of audiovisual knowledge. The intangible assets thus produced (real, digital or virtual) are differentially appropriated by individuals, groups and owners of property rights (in all asset classes). The recovery of the world economy depends as much on this new accountability as on the survival of this bank or that company. With the global Internet, however, coexists a precarious regulatory ecology in which no one is entirely sure of its economic value. What should be the rules governing access, exploitation, intellectual property rights and content distribution? How do people represent and communicate the values and expectations associated with Internet-related actions, projects and technologies? Conceptually, the challenges of network design and implementation (of the social fabric, if you will, as in actor-network assemblies and reassemblies) are compounded by the simultaneous interplay of space, time and symbol: the playful evolution of this electronic human infrastructure corresponds with values, projects and icons for the e-superstructural audiovisual networks. The emergence of mobile and immersive applications and infrastructures (audiovisual, virtual and real) will significantly expand the uses of the available grids, as well as the skills and knowledge necessary for the proper creation, production, management, financing and distribution of information; those without such devices and skills will be left behind at a more than proportional rate. The management of audiovisual tools for human and local development involves both the narrator and the person in charge of surveillance in the same local neighborhood: issues of privacy, intimacy, governance and intellectual property are at stake. On the other hand, access and use must be weighed against skills to maintain a long-term balance between information supply and demand. The regulation of information asymmetries, however, is not just an economic issue as such; it implies control of strategic energy and telecommunications infrastructures, as well as interference with content production and consumption flows, environmental effects and questions of national (iconic) identity.

A. The Brazilian case
The Brazilian iconomy has evolved through three stages of digital-inclusion frameworks designed by federal and state agencies (access, open source and audiovisual), with a growing number of public financing mechanisms, as well as articulation with other public policies in areas such as education, science, technology and innovation, culture and telecommunications.
These frameworks have developed without a general ICT development policy, however, which may be one of the explanations for the drop in Brazil's relative position in the ICT Development Index. A second, more political and institutional issue comes to the fore, given the emphasis on public funding of local content production and recent attempts to reconstruct state-led broadcasting, the social control of communication, and the regulation of digital television in Brazil. The scenarios of future audiovisual policies and their impact on local development strategies must be discussed in depth, taking into account the limited impact of current policies on income generation and distribution, as well as on the creation of sustainable markets for local audiovisual production.

B. Final considerations
Perhaps the ideal scenario is that of an emerging mediapolis, as in Livingstone: the mediated space where we can communicate, get to know each other and take responsibility for each other; a space where multiple mediatized voices talk about the media and its centrality in everyday life; a space where the media and its work in culture, politics, economy and ethics are critically discussed; a space where academics, students, producers and consumers talk about the indescribable and engage with the challenges of a multi-media society; a space where the presence of multiple voices in the same discourse is recognized and respected; a space where criticism is practiced in a spirit of plurality and hospitality [17]. Silverstone draws on Hannah Arendt and her deliberations on the notion of republican democracy in the face of totalitarianism, imperialism and, of course, the threat of mass society. Often unfairly dismissed as a conservative critic, Silverstone seeks to rediscover through Arendt the public art of being with others. In particular, Arendt highlights the role of public judgment, of responsibility and, perhaps above all, of the human capacity to think as the best shield against political catastrophe. A new global political culture, then, does not come about through a McLuhanesque technological transformation, but depends on our shared moral and intellectual capacities. In particular, the ability of the media to stretch the relationships of time and space raises questions related to our civic imagination [18]. This mediated space or mediapolis is a public sphere open to language patterns such as digital emancipation and other creative expressions of civic intelligence [19].

V. CONCLUSIONS AND FUTURE WORK
In this paper we focus on the non-engineering challenges of the Future Internet as the most difficult issues to address. After reviewing some recent publications on the socioeconomic dimensions to consider in the development of the Future Internet, we postulate that the discussion is not over. We broaden it by presenting a systems engineering perspective that provides a more integrated approach and places the human dimension at the center of the process. In addition, we present the iconomic perspective, which shows that digital inclusion and the appropriation of digital technology by prosumers depend on considering a completely new set of values. As future work we intend to carry out a comprehensive study of the Future Internet and to develop a model that includes technological, social and economic dimensions in order to produce integrated roadmaps for different technologies, applications, services and businesses.

REFERENCES
[1] D. Hausheer et al., Socioeconomics of the Future Internet: Challenges and Prospects, in Towards the Future Internet. IOS Press, 2009.
[2] J. M. Rubina, Challenges of the Evolution of the Internet: Attitude and Technology, in Towards the Future Internet. IOS Press, 2009.

[3] F. Forest, O. Lavoisy, M. Eurich, J. V. Gurp, and D. Wilson, Roadmap for Real-World Internet Applications: Socioeconomic Scenarios and Design Recommendations, in Towards the Future Internet. IOS Press, 2009.
[4] S. Ratnasamy, S. Shenker, and S. McCanne, Towards an Evolvable Internet Architecture, in Proceedings of the 2005 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications. ACM, 2005.
[5] T. O'Reilly, What is Web 2.0. Available at , accessed May.
[6] The 15 Most Popular Web 2.0 Websites. Available at , accessed May.
[7] L. Skyttner, General Systems Theory: Problems, Perspectives, Practice, 2nd ed. World Scientific Publishing Company, New Jersey.
[8] D. K. Hitchins, Systems Engineering: A 21st Century Systems Methodology. John Wiley and Sons, Chichester.
[9] M. Ottens, M. Franssen, P. Kroes, and I. V. D. Poel, Modelling Infrastructures as Socio-technical Systems, International Journal of Critical Infrastructures, vol. 2, no. 2-3.
[10] V. Bryl, P. Giorgini, and J. Mylopoulos, Sociotechnical Systems Design: From Stakeholder Goals to Social Networks, Requirements Engineering, vol. 14, no. 1.
[11] M. Houwing, P. W. Heijnen, and I. Bouwmans, Sociotechnical Complexity in Energy Infrastructure: A Conceptual Framework for Studying the Impact of Household Energy Generation, Storage, and Exchange, in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics. IEEE.
[12] A. Kossiakoff and W. N. Sweet, Systems Engineering Principles and Practice. John Wiley and Sons, New Jersey.
[13] INCOSE, Systems Engineering Handbook, Version 3, INCOSE-TP (2006).
[14] G. H. Mazur and A. Jurassic, Jurassic QFD, in Transactions of the 11th Symposium on Quality Function Deployment. QFD Institute, Michigan.
[15] G. H. Watson, T. Conti, and Y. Kondo, Quality into the 21st Century: Perspectives on Quality and Competitiveness for Sustained Performance. ASQ Quality Press, Milwaukee.
[16] The Kano Model. Available at .
[17] Mediapolis, by Livingstone. Available at .
[18] N. Stevenson, Roger Silverstone: An Intellectual Appreciation, European Journal of Cultural Studies, vol. 10, no. 4.
[19] G. Schwartz, Digital Emancipation, Public Sphere Project.

The Engineering Virtual Enterprise: A Framework for Soft and Entrepreneurial Skills Education
Edgar E. Troudt, Christoph Winkler, A. Babette Audant, and Stuart Schulman
Institute for Virtual Enterprise, CUNY Kingsborough Community College
Brooklyn, NY 11235, USA

ABSTRACT
This paper seeks to address common workplace deficiencies present in associate-level pre-engineering students, as well as to cultivate entrepreneurial spirit and ability. These deficiencies cause attrition in the number of engineering graduates and have been identified by employers as important to the modern engineering workforce [9]. Our proposed platform, the STEM-based Virtual Enterprise (VE), is a business simulation program in which students start and run a virtual company within their classroom. The program is already being used successfully in other STEM areas, particularly biotechnology and information technology.¹ The STEM-VE program, a combination of classroom pedagogy, software tools, and an international network of participants, is designed to develop entrepreneurial competencies; interpersonal skills (e.g., teamwork and effective communication when working with people in different roles and positions); critical and analytical thinking; and problem solving. Each student assumes a position within the firm and carries out the responsibilities of that department. This paper specifically sets out the framework for two Engineering Virtual Enterprise (VE-eng) engagements that campuses could adopt to provide a comprehensive experience for pre-engineering students.

¹ The work described is based on funding from the National Science Foundation (under DUE [14] and DUE [13]). All opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF).

1. BACKGROUND
The American Recovery and Reinvestment Act of 2009 supports innovative solutions to pressing national problems in science, health, the environment, education, public safety, and other critical areas, as a way to spur economic recovery and the job creation that is so badly needed. The Act, which provides tax breaks and increases funding for research and science, is based on the belief that these investments underpin prosperity [1]. Last December, the president proposed another round of economic stimulus measures targeting job creation and supporting small businesses through loans and tax credits, among other measures [3]. In engineering fields, the importance of business innovation is well recognized: engineering is a key component of innovation and of our technological society, changes on a global scale are occurring rapidly for engineering, and federal leadership is needed to respond quickly and in an informed way [11]. Engineers work at the interface between science and society, applying scientific and mathematical principles to design practical solutions [6]. While engineering education has traditionally concentrated on solving technical engineering problems, more and more of today's problems are not technical but are found in society [6]. In a special issue of the Journal of Engineering Education, Sheppard, Pellegrino and Olds [15] write: "It would be naive to treat technical and non-technical challenges and opportunities as separable.
The boundaries are 'increasingly blurred' and engineers are called upon to design solutions for our increasingly complex world." Preparing students to meet these challenges and opportunities will require designing and delivering revamped engineering courses and programs.

Active learning strategies such as Virtual Enterprise teach and encourage students to be creative, experimental and entrepreneurial, preparing them to overcome these limitations. There is wide recognition of the need to challenge the ways engineering and related STEM fields are taught. Researchers have found that hierarchical classroom environments discourage collaborative learning [2]; VE falls into the category of interventions that counter such environments. Going further, Turner [17] concludes that soft, non-technical skills do not develop in isolation from technical skills. Rather, hard- and soft-skill development interact in meaningful ways, each process strengthened and informed by the other.

The rationale for the Engineering Virtual Enterprise program is based on studies of project-based learning and of curricula that infuse entrepreneurship into engineering and technical education courses, supporting and strengthening soft-skills development as well as improving student recruitment and retention. Tubaishat [16] showed that including topics such as project management and product development in a computer science course helped students deal with non-technical aspects of project development, strengthening their communication and other soft skills. Problem-based learning, like VE, was successfully integrated into an engineering program where students worked in teams to design a product that met predetermined criteria; students became familiar with the design process and with the importance of working within a strict time frame, as they would in a real-world environment [5]. The integration of entrepreneurship and related project-management skills into engineering curricula enhances the profiles of students as integral participants in the knowledge-based economy [12]. The University of Edinburgh developed a series of interdisciplinary courses [7]; by concentrating efforts on a single project, student teams were able to see the connections between engineering design and economic viability, and the engineering project itself was improved. Dabbagh and Menascé [4] gave students insight into the engineering profession in a first-year course that introduced business concepts using a project-based model; student teams formed IT companies that competed to develop business-support software in a simulated market environment, gaining exposure to engineering careers and engineering business opportunities. Researchers suggest that infusing engineering curricula with entrepreneurship and project-based learning can increase student recruitment and retention [4].

2. PLATFORM: VIRTUAL ENTERPRISE
Virtual Enterprise (VE) contextualizes disciplinary content by having students develop realistic and commercially viable projects in the classroom. The lines between disciplines are blurred as students take a holistic view of their company and its tasks; soft skills are integral. The authors, directors of the Institute for Virtual Enterprise (IVE) of the City University of New York (CUNY), have nine years of experience designing and administering programs of this type, both credit-bearing and non-credit, that help students develop the soft and entrepreneurial skills essential for success in the workplace. CUNY-IVE operates programs across CUNY's twenty-three campuses, serving about 450,000 students. VE has teams of students act entrepreneurially while designing and operating simulated companies in the classroom.
The simulation is a combination of active-learning pedagogy in class, a virtual economy (the IVE MarketMaker) with additional software tools, and an international community of simulated student-run businesses (the IVE Partner Network). Through VE, students gain soft and entrepreneurial skills while making concrete use of content from their academic majors. Soft skills here are the interpersonal skills (such as teamwork and effective communication when working with people in different roles and positions), critical and analytical thinking skills, and problem-solving skills identified in the League for Innovation's 21st Century Learning Outcomes [18]. Each student assumes a position within the company and carries out the responsibilities of that department (such as Research and Development: establishing schedules and ethical protocols; Purchasing: establishing inventory procedures).
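The paper does not describe the internals of the IVE MarketMaker, so purely as a sketch of the kind of bookkeeping any classroom virtual economy needs (registered student firms, virtual balances, settled transactions), here is a toy ledger; the firm names and the small API are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Firm:
    name: str
    balance: float = 10_000.0  # opening virtual capital (arbitrary)

@dataclass
class Ledger:
    firms: dict = field(default_factory=dict)
    transactions: list = field(default_factory=list)

    def register(self, firm: Firm) -> None:
        self.firms[firm.name] = firm

    def pay(self, buyer: str, seller: str, amount: float, memo: str) -> bool:
        """Settle an invoice between two student firms; reject overdrafts."""
        b, s = self.firms[buyer], self.firms[seller]
        if b.balance < amount:
            return False  # insufficient virtual funds
        b.balance -= amount
        s.balance += amount
        self.transactions.append((buyer, seller, amount, memo))
        return True

ledger = Ledger()
ledger.register(Firm("AquaWorks VE"))  # hypothetical water-engineering firm
ledger.register(Firm("CivicGrid VE"))  # hypothetical infrastructure firm
ledger.pay("CivicGrid VE", "AquaWorks VE", 2_500.0, "feasibility study")
print({f.name: f.balance for f in ledger.firms.values()})
```

A real system adds authentication, catalogs, invoicing and reporting, but the pedagogical core is the same: every purchase one student firm makes is another firm's revenue.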

Students have a blended learning experience in which they engage in face-to-face activities both within the simulation (such as participating in virtual-company staff meetings, writing business plans, and developing lead surveys) and outside of it (such as discussing articles on industry issues and exploring various organizational models). The STEM-VE experience, which customizes the simulation for students in STEM majors, has proven to be a powerful method of instilling a passion for the underlying STEM discipline and a sense of entrepreneurial self-efficacy. Results of NSF funding under the Advanced Technological Education program [13], [14], in which the STEM-VE curriculum was infused with IT content, showed a high level of student interest and engagement. Especially interesting is one such study conducted with students from populations typically underrepresented in IT: these entry-level students reported a large change in attitude towards the discipline and aspired to study and work in the field [8].

3. VIRTUAL ENTERPRISES FOR ENGINEERING (VE-ENG CAREERS AND VE-ENG PROJECTS)
This framework proposes two engagements as "bookends" of an associate-level pre-engineering major, as follows:

VE-eng Careers: a non-credit, pre-semester institute designed to help students entering the pre-engineering major understand the breadth of engineering careers, the many fields and types of organizations in which engineers work, and the variety of technically and socially important problems they help to solve. This approach reflects the recommendations of the National Academy of Engineering: "Students who are introduced to engineering design, engineering problem solving, and the concept of engineering as a servant of society early in their college education are more likely to continue their engineering programs to the end. The same approach is also more attractive to women and underrepresented minority students" [9], [11]. Students explore the match between engineering careers and their own strengths, interests, and values. The engagement will have small groups of students simulate various types of engineering firms, identifying and providing rudimentary solutions to industry-relevant problems, building their interest in engineering and motivating them to persist in engineering degree programs and, ultimately, engineering careers.

VE-eng Projects: a three-credit course in which pre-engineering students nearing the end of their associate degree operate a simulated student-run engineering company. This course provides students with the opportunity to apply and practice their knowledge of engineering and applied mathematics while participating in activities that promote the development of business and employment skills. Participating students will act as entrepreneurs, taking the company from its business plan through the conceptualization of an engineering project to planning for deployment. This experience is expected to enhance (1) understanding of the problems engineers help solve, (2) understanding of the process of starting an engineering company, and (3) entrepreneurial and soft skills.

Focus Projects
The National Academy of Engineering has organized the state of engineering problems into fourteen categories as part of EngineeringChallenges.org [10].
Of these categories, seven have sufficient ongoing examples within the engineering community and sufficient literature, and are at a level accessible to community-college pre-engineering students. These categories will form the thematic focuses of the two VE-eng engagements (VE-eng Careers and VE-eng Projects):
1. Provide access to clean water.
2. Restore and improve urban infrastructure.
3. Advance health informatics.
4. Engineer better medicines.
5. Prevent nuclear terror.
6. Secure cyberspace.
7. Advance personalized learning.
Both VE-eng Careers and VE-eng Projects will select their sub-discipline and engineering-project options from this list.

Instructors responsible for delivering the courses will compile data on the careers involved in these areas, giving students access to a number of online resources (e.g., links, career websites, and transfer institutions).

Expected Results
The authors hypothesize that the net result of these engagements will be:
1. Students participating in the VE-eng Careers institute will become more aware of the breadth of engineering projects and careers, and will plan their study options and future careers accordingly.
2. The VE-eng Careers institute will improve retention and recruitment in the pre-engineering major, as evidenced by a significant change in attitude towards the engineering disciplines.
3. Project-based learning experiences, as offered in VE-eng Projects, will instill demonstrable improvement in problem-solving skills, soft skills, and entrepreneurial self-efficacy.

4. VE-ENG CAREERS
The VE-eng Careers engagement (a non-credit, pre-semester institute) will help students entering the pre-engineering major understand the breadth of engineering careers, the many fields and types of organizations in which engineers work, and the variety of technically and socially important problems they help solve. Initially, the engagement will follow this general plan (see also Figure 1):

Figure 1: The VE-eng Careers Engagement Flow.

Phase I: Career and Department Exploration. During this phase, the organizational structure of an engineering firm is established and students research staff positions. The instructor can facilitate this process by using a sample organization chart of a fictitious or partner company and by directing students to career-research websites. This process allows students to explore the variety of careers in the industry, identify particular career paths, and make informed decisions about their options within the field.

Phase II: Problem Identification. During this phase, the company's customers will identify potential problems, issues, and needs, and gather stakeholder demands to support the problem-solving process in Phase III. Management demands and the needs and capabilities of eventual users must be discovered during this phase. As part of this process, students become familiar with the variety of products and services that other companies already offer. Students will carry out an assessment of the company's needs and agree on a priority project.

Phase III: Problem Solving. The company devises a possible solution to the problem identified in the previous phase and presents it to management; this solution will likely be simplistic and may involve bringing together several existing solutions from other companies. Members of management can be simulated by faculty colleagues, corporate partners, or other members of the international VE network. Students will prepare and deliver technical reports on their solutions, as well as presentations aimed at a non-technical audience.

Phase IV: Reflection and Review. Based on the feedback provided in the problem-solving phase, students will revise their product descriptions. Rudimentary SWOT (Strengths, Weaknesses, Opportunities and Threats) reports will be prepared on the company and the proposed solution(s).

5. VE-ENG PROJECTS
The VE-eng Projects course (3 credits, 45 contact hours) will have students act as classroom entrepreneurs in a simulated engineering company.

The course takes students through the development of business plans, the conceptualization of engineering projects, and planning for implementation. Students will finish the course with an understanding of how entrepreneurs operate and how "intrapreneurs" start new ventures within a larger company. Much of the development work will involve the project team developing numerous projects within the focus areas. The operation of the course will follow this general pattern:
1. Review a sample of the breadth of engineering companies and products.
2. Select a niche within engineering; create a corporate identity.
3. Research and design the company structure, including departments and officers. (Typical departments for the simulation include marketing, finance, human resources, and R&D.)
4. Staff the positions within the hierarchy (organizational chart).
5. Design a flagship product or service; produce a demo product.
6. Conduct market research (on other VE students or corporate partners), or present the limited version of the product for comments.
7. Apply for funding by developing a target-consumer identification and marketing strategy.
8. Implement the product, beginning with implementation plans and Gantt charts and ending with the appropriate technical documentation.
9. Develop promotional materials, including analyses that specify quantities such as TCO and ROI where applicable; develop a website and a presentation for non-technical audiences.
10. Present the product(s) to potential consumers and/or integrators.

6. CONCLUSIONS
The framework described here extends the highly effective STEM-based VE program, whose evaluation research [8], [13], [14] has shown it to be extremely successful in its adaptations to the Information Technology (IT) and Biotechnology disciplines. This framework adds to the possible modalities for engineering business education at US institutions. It would fulfill a national priority to redevelop the workforce with 21st-century communication and entrepreneurship skills, as outlined in the American Recovery and Reinvestment Act (ARRA) and other education initiatives of President Barack Obama. While the described scope of this paper focuses on community-college offerings, the potential for a broader impact on the national discussion of engineering curriculum reform is great. If offered at both the community-college and university levels, the program could serve as a bridge between these institutions. In addition, the VE-eng Careers construct can be used to help community-college students explore the career options open to them when transferring to bachelor's programs at local colleges.

7. REFERENCES
[1] ARRA. (2009). American Recovery and Reinvestment Act. Retrieved from: .
[2] Barker, L. and Garvin-Doxas, K. (2004). Making Visible the Behaviors that Influence the Learning Environment: A Qualitative Exploration of Computer Classrooms. Computer Science Education, 14(2).
[3] Calmes, J. (December 8, 2009). Obama Offers a Plan for Small Businesses. The New York Times. Retrieved from: .
[4] Dabbagh, N. & Menascé, D. (2006). Student Perceptions of Entrepreneurship in Engineering: An Exploratory Study. Journal of Engineering Education, 95(2).
[5] Denayer, I., Thaels, K., Sloten, J. & Gobin, R. (2003). Teaching Undergraduate Engineering Students a Structured Approach to the Design Process through Problem-Based Education. Journal of Engineering Education, 28(2).
[6] Grimson, J. (2002). Re-engineering the Curriculum for the 21st Century. European Journal of Engineering Education, 27(1).
[7] Harrison, G., Macpherson, E. and Williams, D. (2007).
Promotion of Interdisciplinarity in Engineering Education. European Journal of Engineering Education, 32(3).

[8] Mórgulas, S. (2007). Virtual Information Technology Enterprises (VEIT): An Integrated Vehicle for Technology Education Reform: Final Assessment Report. Center for Advanced Studies in Education.
[9] National Academy of Engineering. (2005). Educating the Engineer of 2020: Adapting Engineering Education to the New Century. Retrieved from: #knock.
[10] National Academy of Engineering. (2008). Grand Challenges for Engineering. Retrieved from: allenges.aspx.
[11] National Science Board. (2007). Moving Forward to Improve Engineering Education. Retrieved from: _2.pdf.
[12] Papayannakis, L., Kastelli, I., Damigos, D. and Mavotas, G. (2008). Promoting Entrepreneurship Education in Engineering Curricula in Greece: Experience and Challenges for a Technical University. European Journal of Engineering Education, 33(2).
[13] Schulman, S. & Troudt, E. (2008). Enhancing Business and Soft-Skills Training for Two-Year College Technicians through a Contextualized Business-Simulation Program. (NSF ATE Grant: DUE, $749,217.)
[14] Schulman, S. and Deutsch, J. (2005). Virtual Information Technology Enterprises (VEIT): An Integrated Vehicle for Technology Reform. (NSF ATE Grant: DUE, $149,990.)
[15] Sheppard, S., Pellegrino, J. & Olds, B. (2008). On Becoming a 21st Century Engineer. Journal of Engineering Education, Special Issue: Educating Future Engineers: Who, What, and How, 97(3).
[16] Tubaishat, A. (2009). IT Systems Development: An IS Curriculum Course that Combines Best Practices of Project Management and Software Engineering. Issues in Informing Science and Information Technology, 6.
[17] Turner, R. (2004). Towards a Structural Model Connecting Hard Skills, Soft Skills and Job Conditions and the IS Professional: The Student's Perspective. Issues in Informing Science and Information Technology, 1.
[18] Wilson, C.D., Miles, C.L., Baker, R.L. & Schoenberger, R.L. (2000). 21st Century Learning Outcomes: New Competencies and Tools for Community Colleges. League for Innovation in the Community College. The Pew Charitable Trusts. (ERIC Document Reproduction Service # ED439751.)

Refocusing Engineering Design for a Sustainable Living Environment
Larry Arno VINT
Design Department, Griffith University
South Bank, Queensland 4101, Australia

ABSTRACT
Designers see themselves as an integral part of the creative industry; in reality, however, they represent the business of consequences. Designing involves solving problems and improving people's lives; therefore, everything designers create with each decision comes at an environmental price. The dilemma many designers face is finding an internal balance between the readiness to make informed decisions that incorporate sustainable practices and the constraints of participating in a commercial, profit-driven venture. Thus, ethical design, with its degree of sustainability, forces designers to make a choice.

Keywords: Design, Engineering, Sustainable Design, Climate Change, Global Warming, Education

1. INTRODUCTION
Design engineers, whether trained in electrical, mechanical, civil or architectural disciplines, are considered an integral part of the creative industry; in reality, however, they represent the business of consequences. Whether one is designing a new product, a system, or the inner workings of a particular design, from the preliminary development stage to the design of the most critical parts, the synthesis of combining different ideas, influences and/or objects in the design stages is paramount. Engineering design involves solving problems and improving people's lives; what engineers create with each decision can be multiplied by thousands and often millions through mass production, and every item produced has an environmental price. Chochinov [2], in his manifesto for sustainability in design, wrote that we are suffocating, drowning and poisoning ourselves with the things we produce, which wear down, off-gas and seep into the air, water, land and food, and that designers are fueling this cycle, helping to turn everyone and everything into a consumer or a consumable. Engineers must understand the role and impact that manufactured products and built environments have on the world. The engineering process is part of a toolkit for solving problems and improving lives, not just in the short term but for generations to come. Furthermore, design is a means, not an end. To develop sustainable solutions, an engineer has to transform concepts while considering the impact of the produced design. That impact goes far beyond the mere interaction of the intended design with its consumer: as a consequence of the production and use of the design, it affects humanity globally, with ramifications for people, the environment and the economy. Nor is the consequence of a design felt only in the present; it also has implications for generations to come. The materials used to make the design; the resources needed to make, package and sell it; the quantity, quality and longevity of the product; and whether it should have been designed in the first place all have a major influence on the sustainability of the design. Design engineers are often implicated in the current environmental crisis because of their active participation in promoting a culture of market saturation with unnecessary products, excessive engineering, and encouragement of mass consumption of materials [7]. As detailed in Papanek's book Design for the Real World [21], the designer's has become one of the most damaging professions.
Yang and Giard [34] state that the design profession is both the problem and the solution, so design-engineering professionals and students need to understand the ecological impact of their profession. Findeli [5] writes that without a responsible designer, responsible design will not occur. It has become imperative that sustainability be encouraged within the profession and taught in the engineering/design curriculum at both the school and university levels.

2. CLIMATE CHANGE
Climate change is one of the greatest challenges facing our generation [23]. The design engineers and politicians of our time will be judged by future generations on their ability to rise to this challenge. The latest research shows that climate change will harm every economic, social and environmental aspect of life [28]. The Prime Minister of Tuvalu, Telemi [27], declared: "Never, in the history of mankind, have we faced such a global challenge. We leaders must do this [address climate change] for our children and our children's children." Fry [11] wrote: "Actually I would say that we are at a tipping point; the future of humanity as we understand it is really before a choice that says: do we change direction or try to keep what we already have? The challenge we have now is to deal with the world that we have created, and sustainability in that sense is both a kind of process and a project about that exercise, dealing with the world that we have, creating a future. It is about creating another type of leadership, other types of ways of life, other types of economies, recognizing that we are in a very dangerous situation and that, to be sustainable, we have to be able to eliminate conflicts and damage to the environment." Organizations including the United Nations [31], [32], UNESCO [29], [30], the International Association of Universities [15] and the UK Government [13], together with internationally recognized climate-change theorists Clark [3], Fry [8], Laszlo [17] and Rebelo [24], indicate that climate change is a design problem and that through sustainable education human beings can address one of the key priorities of the 21st century. The ecological architect Van der Ryn, a renowned researcher, theorist, educator and leader in sustainable architecture, wrote: "In many ways, the environmental crisis is a design crisis. It is a consequence of how things are made, buildings are constructed and landscapes are used. Design manifests culture, and culture rests firmly on the foundation of what we believe to be true about the world. Our current forms of agriculture, architecture,

engineering and industry derive from design epistemologies incompatible with those of nature" [33].

A September 2009 statement by the chair of the Intergovernmental Panel on Climate Change (IPCC), Dr. Pachauri [20], for the United Nations Climate Change Summit in Copenhagen cited the findings of the IPCC's Fourth Assessment Report (AR4), the collective research of four thousand specialists over a period of five years: if no measures are taken to stabilize the concentration of greenhouse gases in the atmosphere, the average temperature by the end of this century will rise by between 1.1 and 6.4 degrees C. Figure 1 illustrates the IPCC's average temperature forecast. AR4 further projects:
- increased frequency of hot extremes, heat waves, and heavy rainfall;
- increased intensity of tropical cyclones;
- declining water resources in many semi-arid areas, such as the Mediterranean basin, the western United States, southern Africa, and northeastern Brazil;
- possible loss of the Greenland ice sheet and a resulting contribution to sea-level rise of about 7 meters (without mitigation, future temperatures in Greenland would compare to levels estimated for 125,000 years ago, when paleoclimate information suggests a 4-6 m sea-level rise);
- a likely increased risk of extinction for about 20-30% of the species assessed so far, if the increase in average global warming exceeds 1.5-2.5 degrees C.

Figure 1. The average temperature for the end of this century. Source: Robert Corell, Heinz Center. (After: Safe Climate Australia Prospectus, July 2009, page 11.)

Science leaves designers no room for inaction. Designers urgently need to act and to make sustainable reforms within their own fields of design. Research by the Emission Database for Global Atmospheric Research provides a snapshot of annual global greenhouse gas emissions for the year 2000: industrial processes 16.8%; residential, commercial and other sources 10.3%; transportation fuels 14%; waste disposal and treatment 3.4% (see Figure 2).

Figure 2. Greenhouse Gases by Sector. Source: image created by Robert A. Rohde / Global Warming Art.

The depletion of natural resources is faster than nature or humans can replenish them; sea level is rising as a result of melting ice caps and warming oceans; extreme temperatures are causing increased precipitation, tropical storms and cyclones; and pollutants are widely found in waterways, oceans, soil and air due to unsustainable agricultural and manufacturing processes, greenhouse gas emissions, and overproduction [16]. In the absence of sustainable action, the likely toll includes the following:
- Economies will falter; the success of many national economies is closely linked to their natural resources [26].
- Sea ice may disappear by the end of the 21st century.
- Experts estimate that climate change will force millions of people from their homes over the next fifty years, through increased floods, fires, droughts and deadly heat waves.
- If the sea level were to rise by 1 meter, most of the land is believed to be at risk of going under water in Bangladesh (population 162 million), Sri Lanka (20.2 million), Tuvalu, Nauru, the Antarctic Peninsula, the Maldives, Singapore, the Caribbean states, the islands of Papua New Guinea, Micronesia, Kiribati, Indonesia, Samoa and Egypt; many other countries will find their fresh-water supplies contaminated with salt water. According to the United States Environmental Protection Agency (EPA) [4], sea levels will continue to rise for several centuries even if global temperatures stop rising by 2020.

3. DESIGN SUSTAINABILITY
Design sustainability means setting in motion a process of transformative change towards an agreed sense of direction in response to the circumstances in which design engineers find themselves. These directions come from the engineering design sector on a global scale [15]. It is through bad design decisions and their consequences that the environmental, cultural, economic and social future of people is being severely changed or taken away. Through design, people are becoming unsustainable. Design
According to the United States Environmental Protection Agency (EPA) [4], sea levels will continue to rise for several centuries, even if global temperatures stop rising by 2020; Figure 2. Greenhouse Gases by Sector. Source: "Image created by Robert A. Rohde / Global Warming Art" 3. DESIGN SUSTAINABILITY Design sustainability is setting in motion a process of transformative change towards an agreed sense of direction to respond to the circumstances in which users find themselves. design engineers. These directions come from the engineering design sector on a global scale [15]. It is through bad design decisions and their consequences that the environmental, cultural, economic and social future of people is being severely changed or taken away. Through design, people are becoming unsustainable. design 159

196engineers can no longer ignore the ramifications of their own design decisions. Most of the designed products, directly or indirectly, lead people to unsustainable lifestyles. It is crucial that design engineers learn to design for a sustainable future. 3.1 Domain of design Design responds to the world we live in by addressing three particular types of ecologies (biophysical, social, and ecology of mind) [12] that have been damaged by climate change and all have a relationship to design in both terms of how these ecologies became distorted and what must be addressed to achieve a viable future [6]. There are two important points to this: First, climate change is only one of the problems, and it is a problem that breeds other problems, so the problem facing humans is more serious than the implications of climate change. To put this in context, the November 2009 United Nations monitoring of emissions reductions indicated that global warming is progressing and greater than anticipated [32]. Using 1990 levels as a reference point, the temperature has risen 41 percent. The spectrum of global warming is organized from 1.8 to 7 degrees. Right now the earth is seeing a temperature rise of 7 degrees. So the way governments have been talking about working to reduce emissions to 5% is too low for what is required. To add to this, the speed of government action to reduce climate change has been incredibly slow in terms of the pace of the problem. The speed and acceleration of the problem on the one hand and the very slow response on the other is another way of characterizing the problem. Therefore there are two issues, the inherent or intrinsic problem created and the problem of responding to that situation in an inferior or inadequate way. 3.2 Biophysical ecology The first ecology is familiar, what people mean by biophysical ecology [6][8], but what has happened to a significant part by design is that humans have made the distinction between the natural and the artificial is impossible to distinguish. distinguish. For example, not even the water we consume is natural; with additives such as fluorine, chlorine and antibacterial chemicals. People depend on the artificial, since it has become indivisible from the natural. This has become one of the reasons why design has to be on an increasingly pronounced trajectory of importance in relation to the situation in which people find themselves. But it is more than a simple physical understanding of the complexity of one or the other, it is also a perceptual indivisibility between the natural and the artificial; in other words, people do not see the world naturally. People only see the world from what people have artificially learned. The relationship between what helps empirically and what helps perceptually has become an important part of the problem. 3.3 Social ecology The second ecology in which people live and exist is social ecology. The fact that people only exist by virtue of others and do not have the ability to exist as simply independent identities, so the social and the notion of community have a direct correlation with the ability to thrive and survive. There is a relationship between one and the other, so as biophysical ecology becomes critical there is a very strong possibility that social ecology will also become critical. Returning once again to climate change to illustrate the fact that by the end of the century it is quite possible that around 10% of the world's population will be displaced under the term climate refugees. 
To put this in perspective, tens of millions of people will become refugees. These people will not be completing forms to migrate to different countries; they will move and travel wherever they can, in whatever circumstances they find themselves. The notion of border protection and immigration procedures will disappear entirely, replaced by a host of social disruptions and dysfunctions. The Australian Defence White Paper published on May 5, 2009 [1] indicated an expectation that this problem will reach the country from the north. The government's short-term solution is to increase the size of the Australian Navy and to begin deploying more troops to the north of the country as a precaution. Given the large number of vessels expected to arrive, the government's solution does not seem adequate. From this example one can see how the problem is playing out, and how it outpaces people's ability to consider and address it.

3.4 Ecology of mind
The last ecology, the ecology of mind, relates to design education. People exist in a particular way of thinking, and that way of thinking has a direct relationship with their way of seeing. People see with their eyes (physically) as well as with their minds (interpretatively) the existing reality (perceptually), and some see requirements and, with foresight, possible solutions for the future (author unknown). The eyes are simply instruments that facilitate the ability to see; people see the result of what they know. If what people know is how to act in the world in destructive ways, then that way of thinking determines what they do. So a large part of designers' problem is that people still mistakenly think the world is a place of infinite resources, when in reality resources are limited. People think in terms of being, enduring and continuing as a species when in reality they are a finite species, and the more they mistreat the conditions on which they depend, the less time they have. The biophysical, the social and the ecology of mind are therefore inseparable; separated, they are an explanation of the design domain rather than how one would normally understand design.

4. CONSEQUENCES
Designers need to be able to face and approach design problems with an awareness of time. To help understand the timing implications: the lifetime of CO2 in the atmosphere is generally estimated at 200 years, so no matter what humans do, they will be in this state for the next 200 years. CO2 has always been in the atmosphere; it is the only way plant life on earth can obtain the carbon it needs to grow. The industrial revolution is not really over yet, and since its start the proportion of CO2 in the earth's atmosphere has increased dramatically; it continues to increase while the plant life that absorbs it is depleted, and that is the important part of the real problem. The way the world's temperature is regulated is through a thermostat-like process, the thermostat being the earth's deep oceans; in terms of temperature adjustment, it takes on the order of centuries for the deep ocean to change by a degree or two. Scientists have recognized that sea-level rises are projected to continue for 300 to 400 years. The problems will therefore not be solved in the near future.
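To make the timing claim concrete, here is a minimal back-of-envelope sketch that treats the text's 200-year figure as the e-folding time of a single exponential decay of an emitted CO2 pulse; real carbon-cycle models superpose several timescales, so this is purely illustrative.

```python
import math

E_FOLDING_YEARS = 200.0  # the text's rough atmospheric lifetime of CO2

def remaining_fraction(years: float, tau: float = E_FOLDING_YEARS) -> float:
    """Fraction of an emitted CO2 pulse still airborne after `years`,
    under a single-exponential decay assumption."""
    return math.exp(-years / tau)

for y in (50, 100, 200, 400):
    print(f"after {y:3d} years: {remaining_fraction(y):.0%} of the pulse remains")
```

Even under this generous single-timescale assumption, more than a third of today's excess CO2 would still be airborne two centuries from now, which is the sense in which near-term design decisions have multi-generational consequences.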

The situation people are in right now is that between 1.8 and 2 degrees of warming will occur no matter what people do; this has already been determined by the damage already done. If people keep doing what they are doing now, there is a chance of a 7-degree rise. If the earth's temperature were to rise by 3 degrees, Australia would lose its Great Barrier Reef and parts of its flora and fauna, including some eucalyptus species, and many coastal properties would be flooded. The world would change dramatically; by the end of this century, much of this planet as people know it would be unrecognizable. Some areas of the earth will actually be more habitable than they are right now, but many more places will be dramatically worse.

Designers have a very simple option: they can continue to design as they do now, contributing to problems many of which came about by design, or they can try to design in another way. In simplified terms, designers can be part of the solution or part of the problem; the decision comes down to choice. One can be paralyzed by the choice, or be stimulated, motivated and even to some extent enthused by the challenge. Designers need to change direction. Also implicated are the people who have an impact on the design outcome: customers, managers, project partners, suppliers, project leaders and lead designers, all of whom can influence the final design outcome of the project.

5. DESIGN ENGINEERING SOLUTION
Design engineers need to deal with the world they have already created. The design solution must be balanced against the problem and its effects on the environment and, ultimately, on everyone. For example, people do not need a battery-powered shovel to pick up dog waste, and they do not need cars that get 17 miles per gallon or less (17 mpg is the average gas mileage of the average US car, EPA 2009 [4]). Sustainable architecture is good for the environment, but many firms do not deal with existing buildings and/or cities. It is imperative to be able to deal with what already exists; this does not mean never designing anything new, but designing quantitatively in relation to what is the biggest problem. The world's largest retailer, Wal-Mart, has switched to sustainable packaging for its products through redesign. In 2007, the company identified USD 10 billion in savings from packaging efficiencies in the first two years after making new sustainability decisions [25]. The sustainability changes involved reducing packaging waste by 5%. Many factors were associated with making the packaging more sustainable, including greenhouse-gas reductions during packaging manufacture, substrate material choices and chemical composition, the removal of PVC from its private-label packaging, and the integration of recycled materials into new products. Packaging reduction across the entire packaging supply chain was designed through a 'cradle to gate' approach.

In Figure 3, designer Lotersztain illustrates his belief that sustainable design has no limits. Used mainly to protect boats during mooring, reconditioned second-hand yacht fenders have been transformed into a sofa. The design is functional but informed by a sense of environmental responsibility, recycling in a new way the energy and resources already spent in the production of large nautical vessels.

Figure 3. The Crusoe sofa (refurbished pre-owned yacht fenders), designed by Alexander Lotersztain, Studio Derlot.
These marine fenders, designed for the most extreme weather conditions, become the seat and back supports of the Crusoe sofa. They are supported by a recyclable stainless-steel frame, and the fenders and frame can be deflated and packed for transport.

Figure 4. The Hinkler bench (Moso bamboo), designed by Kent Gration, Wambamboo.

Figure 4 illustrates the Hinkler bench by designer Gration. Gration uses Moso bamboo in his designs to achieve environmental benefits owing to its versatility, biodegradability and rapidly

renewable growth cycle. This species of bamboo has been used for centuries in construction, as a food source and as an ornamental plant. With the growing global demand for environmentally friendly materials, it is now available in textiles, cross-laminated panels, veneers, plant-based polymers and a wide range of joinery products. Elimination design and platforming, discussed below, are two different types of redirective practice for designing for sustainment [9].

5.1 Elimination design
Elimination design is learning to eliminate things by design, as well as designing things into existence. It is an interesting and often difficult exercise: identify something that is a problem, something that is really causing harm, and then find a way to get rid of it. In terms of creative challenge, this is the most difficult, time-consuming and effort-consuming part of a design process; despite the complexity, however, a developed ability to undertake elimination design is highly sought after. Over-engineering is one example where elimination design can be used to reduce environmental impact. Over-engineering has been a consequence of acceptance in high-end and specialty markets, and it has resulted in product designs that are more complicated and consume more resources, in production and in materials, than necessary. In many cases of design engineering, less is more. Keeping a design simple is captured by the French writer and aviator Antoine de Saint-Exupéry, who stated that in anything, perfection is finally attained not when there is nothing left to add, but when there is nothing left to take away [22].

5.2 Platforming
Platforming recognizes that companies find themselves in a situation where they simply cannot stop designing the way they previously did and suddenly change direction [18]. A company cannot let things crash around it as it transitions from where it is now to where it wants to be. By building a whole range of different types of platforms, a company keeps producing the items it needs in order to survive while beginning to build its future within itself; it then puts its platform, so to speak, in competition with itself. The company must not stop serving the design market it has created, but it must recognize that things have to change. Honda was one of the first engine manufacturers to use platform design as the foundation of sustainable responsibility in its design-development process, becoming the industry leader in its range of power products. Honda realized in the early 1960s that traditional carbureted two-stroke engines were responsible for an alarming rate of emissions and contributed significantly to pollution [14]; the company therefore adopted a policy of building only four-stroke engines. Honda began designing and manufacturing four-stroke engines while continuing production of its two-stroke range. Today its lawn mowers, line trimmers, brushcutters, generators, outboard motors, snow throwers and other power equipment use ultra-low-emission four-stroke engines, and Honda management strongly supports government legislation to ban two-stroke engines. Moreover, despite being known as the world's largest manufacturer of internal combustion engines, Honda has never built a V8 for passenger cars, in consideration of gasoline consumption and efficiency.
6. HISTORICAL CONDITION
In the 1930s, design engineering was introduced in the United States to accelerate consumption and increase spending in response to the economic crisis of the time [19]. This became the basis of modern consumerism and is inseparable from the trajectory of globalization; a large part of the consumer society can be traced to that moment. It represents a distinct form of destruction rather than simply a proliferation of consumption. People have managed to find incredibly alluring and efficient ways to take the future away by design. What is required is to learn to do the opposite. People need to bring the future into existence as something that has viability, recognizing that the future is not a void in front of them that they are traveling towards; on the contrary, it is something into which so much of the past has already been thrown. The way people travel into the future is by negotiating their way through everything that already exists within it. One can only do that by design; people cannot get to the future by accident. As for establishing a platform for sustainable design education, one has to see today's events as an opportunity. This is why leadership and opportunity should be sought: if they truly embrace design, practicing designers and design educators have the position and the responsibility to bring about a change of direction that will make a significant difference. Taking advantage of that opportunity is difficult, but it has become a necessity.

7. THE CHALLENGE
It is much more difficult to change the thinking of a practicing design engineer than it is to educate an engineer/designer in the early years of education. Good design contributes to the possibility of a viable future; bad design is what takes it away. Many things in the past were classified as good design but did not perform well in terms of what they delivered, whether environmentally, socially, or economically.

8. CONCLUSION
To become an ethical design engineer, one must take responsibility for the objects one creates, whether they are industrial, architectural, mechanical, civil, or electrical. Being a design engineer is not about being trendy; it is not about being seen as creative or as a problem solver; it is about being responsible for what one brings into existence. Moreover, although in most cases the client designates what is to be designed, the engineer bears a significant share of responsibility for the overall effect of the designed product or project on the environment, its consumers, and the world at large. In simple terms, designing ethically means taking responsibility for form and function in ways that minimize the use of natural resources and prevent or minimize pollution and environmental damage. An ethical design can be evaluated according to its degree of sustainability, which also implies the elimination of products that are not sustainable: instead of creating more green stuff that simply adds to consumer choice, products can be eliminated by design [10]. Designers have the skills, the resources, and the reasons to deal with sustainability; what they need now is the will to act.

REFERENCES
[1] Australian Government, Defence White Paper 2009: Defending Australia in the Asia Pacific Century: Force 2030, Department of Defence, Commonwealth of Australia,
[2] A. Chochinov, 1000 Words: A Manifesto for Sustainability in Design, World Changing Team, New York,
[3] Rt. Hon. C. Clark, MP, Education and Skills Sustainable Development Action Plan, UK,
[4] EPA, Environmental Protection Agency of the United States,
[5] A. Findeli, Rethinking Design Education for the 21st Century: Theoretical, Methodological, and Ethical Discussion, Design Issues, Vol. 19, No. 1,
[6] J. Fischer, A.D. Manning, W. Steffen, D.B. Rose, K. Daniell, et al., Mind the Sustainability Gap, Trends in Ecology & Evolution, Vol. 22, No. 12, 2007, pp.
[7] K.T. Fletcher & E.L. Dewberry, Demi: A Case Study in Design for Sustainability, International Journal of Sustainability in Higher Education, Vol. 3, No. 1,
[8] T. Fry, Dead Institution Walking: The University, Crisis, Design & Remaking, Design Philosophy Papers, No. 5, Australia,
[9] T. Fry, Design Futuring: Sustainability, Ethics and New Practice, UNSW Press, Sydney, 2009b.
[10] T. Fry, Elimination by Design, Design Philosophy Papers, No. 2, Australia,
[11] T. Fry, Innovation Cities, Future Tense, ABC Radio National, Brisbane, August 20, 2009a, transcript.
[12] T. Fry, Remakings: Ecology, Design, Philosophy, Envirobook, Sydney,
[13] H.M. Government, Securing the Future: Delivering the UK Sustainable Development Strategy, 2005, Executive Summary.
[14] Honda Motor Company Limited, Honda Ecology: Honda Environmental Conservation Activities, Japan,
[15] International Association of Universities (IAU), IAU Priority Issues: Sustainable Development, retrieved February 2, 2010 from
[16] IPCC, Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change [Houghton, J.T., Y. Ding, D.J. Griggs, M. Noguer, P.J. van der Linden, X. Dai, K. Maskell, and C.A. Johnson (eds.)], Cambridge University Press, Cambridge, UK & New York, NY, USA, 2001, pp.
[17] C. Laszlo, Sustainable Value: How the World's Leading Companies Are Doing Well by Doing Good, Stanford Business Books,
[18] Z. Liu, Y.S. Wong, and K.S. Lee, A Manufacturing-Oriented Approach for Multi-Platforming Product Family Design with Modified Genetic Algorithm, Journal of Intelligent Manufacturing, published online December 11,
[19] D.F. Noble, America by Design: Science, Technology, and the Rise of Corporate Capitalism, Oxford University Press,
[20] R.K. Pachauri, Summit on Climate Change: national statement by the Chair of the Intergovernmental Panel on Climate Change (IPCC), a pre-recorded video for the Conference of the Parties (COP) 15, United Nations Climate Change Conference hosted by Denmark in Copenhagen, video recorded on September 22.
[21] V.J. Papanek, Design for the Real World: Human Ecology and Social Change, Thames & Hudson, London,
[22] J. Porter, Five Principles of Design: A Guide to Creating Designs That Work, TechSoup, August 8, 2007, retrieved February 2, 2010 from cfm.
[23] L.L. Rasmussen, Summit on Climate Change: national statement by the Prime Minister of Denmark, a pre-recorded video for the Conference of the Parties (COP) 15, United Nations Climate Change Conference hosted by Denmark in Copenhagen, 2009, video recorded on September 9.
[24] D. Rebelo, What Is the Role of Higher Education Institutions in the United Nations Decade of Education for Sustainable Development?, International Conference on Education for a Sustainable Future: Shaping the Practical Role of Higher Education for Sustainable Development, International Association of Universities (IAU), Charles University, Karolinum, Prague, Czech Republic, September,
[25] L. Scott, Better Sustainment, Live Better Sustainment Summit, Bentonville, Arkansas, USA, October 10,
[26] A. Smith, What the Finniston Report Should Have Said, Engineering Designer – Journal of the Institution of Engineering Designers, United Kingdom, Vol. 7, No. 3, 1981, pp.
[27] A. Telemi, Summit on Climate Change: national statement by the Prime Minister of Tuvalu, a pre-recorded video for the Conference of the Parties (COP) 15, United Nations Climate Change Conference hosted by Denmark in Copenhagen, video recorded on September 22,
[28] N.M. Triet, Summit on Climate Change: national statement by the President of the Socialist Republic of Vietnam, a pre-recorded video for the Conference of the Parties (COP) 15, United Nations Climate Change Conference hosted by Denmark in Copenhagen, video recorded on September 22,
[29] UNESCO, United Nations Decade of Education for Sustainable Development: The DESD at a Glance, France,
[30] UNESCO-UNEP, Environmentally Educated Teachers: The Priority of Priorities, Connect, Vol. XV, No. 1, 1990, pp.
[31] United Nations, Process for the Development of the Environmental Perspective to the Year 2000 and Beyond, Resolution of the General Assembly, Vol. 38, No. 161, December 19,
[32] United Nations, World Population to 2300, United Nations, New York,
[33] S. Van der Ryn & S. Cowan, Ecological Design, Island Press, Washington, DC,
[34] Y. Yang & J. Giard, Industrial Design Education for Sustainability: Structural Elements and Pedagogical Solutions, paper presented at the Industrial Designers Society of America (IDSA) National Education Conference, Massachusetts College of Art, Boston,

Integrated and Data-Driven Engineering for Virtual Prototyping
Stephan Vornholt, Otto-von-Guericke University Magdeburg, Magdeburg, Germany, and Veit Köppen, Otto-von-Guericke University Magdeburg, Magdeburg, Germany

ABSTRACT
The increasing use of components in automotive systems is accompanied by complex product dependencies that span engineering and economic domains. Specialized analysis approaches and simulations are therefore used to identify the influences of domains and components on each other. Current research deals with database systems and data management approaches that integrate different engineering domains in a component-oriented model. At the same time, integrative management solutions have made great strides in the field of computer-supported economic analysis and optimization: ERP systems and data warehousing approaches enable a wide range of data storage and analytics solutions. A challenging task is the interaction and mutual influence of the different engineering and management disciplines. This article presents a comprehensive solution for data-driven and integrated engineering of prototypes in the automotive domain. The approach allows for early updates of concepts and structures, as well as recombination of virtual products, including the planning process. Based on a description of the challenges in integrating domain experts, an integrative architecture is presented.

1. INTRODUCTION
The increasing use of software-based approaches in automotive systems is made possible by support in the fields of computer applications and further research on control systems for complex dependencies. These allow testing and analysis on virtual prototypes before real prototypes are implemented and manufactured. A challenging task is the interaction of different components, as well as the mutual influence of the different engineering and management disciplines. On the engineering side, CAx techniques and concurrent virtual/digital engineering approaches enable the engineering of complex products, taking into account, for example, the design-build and analysis-simulation phases. On the management side, data-driven systems have made great strides in the fields of computer-controlled parts lists, cost control, and delivery optimization. The interaction of both research areas promises a faster and cheaper design phase, as well as keeping design decisions open longer. To support this goal, an integration of both domains, management information systems and concurrent engineering, is necessary. The early adaptation of ideas and concepts is enhanced by Virtual Engineering (VE) in the field of mechanical engineering solutions, giving rise to a large number of variants. Many influences must be considered and tested, including the market. Business issues influence product redesign, just as the design process itself can influence business constraints. Therefore, active cross-linking of tools and production partners is required. Models and strategies may change and necessitate a redesign of a prototype. This approach allows for early updates of concepts and structures, as well as recombination of virtual products, including the planning process.

EXAMPLE
The simplified development of an automotive system, called CAR, is presented as an exemplary process that follows the product life cycle; see Figure 1. Following the definition of the basic objective, a shared concept is developed, in which the constraints are defined and the conceptual design is roughly specified.
Then, in the design-build phase, engineers iteratively develop simulation models and virtual prototypes that are analyzed and, if the result meets the requirements, sent to process planning and production, as well as to economic and market analysis. The white arrows represent the desired steps from model improvement to production, while the black arrows illustrate possible re-engineering decisions.

Figure 1: Life cycle

The design itself focuses on geometry and connections, as well as mechanical systems, electrical drives, and control components. An elementary geometry design might assume the following structure: one body and two axle constructions, each having one axle, two wheels, and two connecting elements; see Figure 2a. The connections between the different elements are also defined. Many mechanical definitions can be used, such as weight, volume, or material properties, as well as predefined connection types between elements; see Figure 2b.

Figure 2a: CAR example (3D CAD). Figure 2b: Common CAR concept.

According to our example, the following domains cooperate in this process. Electrical design: the engineer designs the system's electrical drives and electrical power management; information from the conceptual design, the geometry, and product libraries, e.g., engine databases, is combined. Simulation and analysis: the effects are simulated with models, and the simulation results are used to identify weaknesses or deficiencies and provide information for further improvements. The backbone of product analysis is typically formed by the following domains. Mechanical analysis, e.g., SimMechanics: mechanical simulation models are used to analyze the mechanical behavior of prototypes, for example kinematic behavior, testing for potential collisions, and allowed movements. Mechatronic analysis, e.g., Modelica, described in [4]: the mechatronic model adds electrical drive and control components to the model of the mechanism. Finite element (FE) analysis (ANSYS, NASTRAN): FE models are used to analyze the elastic behavior of certain parts of the system; typical FE analysis tasks are the calculation of resonant frequencies or bending stresses [6]. After these technical analyses, several economic and management assessments follow, i.e., marketing, production planning, cost, and inventory. The product can be selected for production or redesigned with new or adapted constraints, again involving economic and technical decisions. Depending on the suggested changes, the engineering process is restarted.

2. RELATED WORK
The combination of approaches in virtual engineering (VE) and enterprise resource planning (ERP) systems into an integrative architecture, which combines the positive effects of new database approaches for VE and ERP systems, requires a deeper insight into both kinds of systems.

Virtual engineering
The product development process involves various experts in specialized domains with their own vocabularies, knowledge, and tasks. Each expert uses domain tools and attached information management systems, but all work on the same conceptual design and exchange information with other groups. The interaction and heterogeneity of the data models and information systems used lead to interoperability and data integration issues, as well as management issues, within product development processes. VE aims at computer-aided parallelization of design and construction, as well as simulation and analysis, to reduce product development time and cost. A virtual prototype (VP) represents a computer-based prototype of a real-world artifact. It can be tested from different points of view, and both the VP and the tests can be viewed in Virtual Reality (VR). VPs therefore comprise and combine all information, for example CAD geometric designs, product data, behavior models based on FE analysis or mechanism models, and special geometric models for testing in VR. Synthesis steps, e.g., parameterization, and analysis steps alternate during product development.
To shorten and improve the development process, existing and verified designs, simulation models, and virtual reality scenarios must be reused, modified, and recombined for new developments. Various solutions are at the center of current research, including integration into a common data structure, e.g., as presented in [1, 2], information transfer, e.g., in [3], and integration into a common tool and storage system [5]. All of these approaches lack information sharing, specialized views, or support for redesign.

ERP
ERP is an enterprise-wide computer software system used to manage and coordinate all resources, information, and

functions of an enterprise from shared data stores. ERP software is a comparatively recent addition to manufacturing and information systems, designed to capture and organize the flow of data for the entire product life cycle. It attempts to link all of a company's internal processes into a common set of applications that share a common database. It is this common database that allows an ERP system to serve as the source for a robust data warehouse (DW) that can support sophisticated analysis and decision support. ERP software typically has a central database as its hub, allowing applications to share and reuse data more efficiently than separate applications previously allowed. The database of an ERP system is functionally organized or process oriented [14]. In an ERP system, data cannot be analyzed directly and efficiently, because ERP uses online transaction processing (OLTP) to handle the data; OLTP is a class of software that facilitates and manages transaction-oriented applications, typically for data entry and retrieval. Data can be exchanged between disparate systems, especially disparate CAD systems, via STEP or related solutions such as MechaSTEP or IGES. To store design data, it must first be archived within the organization that produced the part. Today, many large corporations have archived data in CAD/CAM formats that are no longer supported by any vendor. Existing database approaches, such as constraint databases, are being investigated for archiving design data in neutral interchange formats. To avoid the dominance of geometry, integrated design could be organized around an integrated product model that manages design information. Current approaches generally assume that integrated design requires a single integrated model; typical solutions are described in [7, 8]. Product Data Management (PDM) systems are used to provide a common environment within the product life cycle. PDM is a tool that helps engineers manage engineering data and product development processes. Since PDM systems are widely used to reduce product development time, they need to exchange product data with CAD systems; integrating CAD and PDM systems is necessary because the CAD systems generate the product data and the PDM systems manage it. Product structure data management is the primary function of a PDM system. PDM systems should enable engineers and other users to search for design models and reuse business knowledge and best practices by combining artificial intelligence techniques, such as neural networks and expert systems, with object-oriented CAD and databases. Comprehensive solutions that integrate different disciplines have only been researched and implemented for individual companies; these solutions are not usable for engineering clusters with many participating companies. Although the list of solutions within the individual disciplines, i.e., product development, economic solutions, and data exchange, is long, an integrated approach is lacking. An open architecture that integrates heterogeneous systems in the same database or a common database schema is necessary as a solution for the control and exchange of concurrent information, and it will be presented in the next section.

3. VE AND ERP INTEGRATION
This section describes the integration of both views of the engineering process. This, on the one hand, solves the challenges described above and, on the other hand, further improves the development process, owing to the use of all available information across the entire product life cycle.
Challenges
Although various data-driven solution systems exist in the engineering and management disciplines, and within each discipline the integration of different domains is a focus of current research, joint data management across both is not yet available. The result is that both disciplines work on the same product with high concurrency and data redundancy, but the interaction between the two design lines is limited to one input and one output; the other discipline is treated as a black box.

Figure 3: Concurrent engineering and management

By integrating both approaches on a common data schema, the main tasks of each domain should not be changed. Both need a complete and convenient data warehouse with metadata management, data integrity, and version management, as well as combined multi-user interaction, consistency control, and specialized views. The requirements for a new system that improves engineering and management at the same time are outlined below; see Figure 3. The integrated data schema allows for simultaneous work processes, i.e., technical and economic analysis, where engineering steps can use work-in-progress models to estimate their results. One of the main challenges for databases is the (re)definition of views. Since each cooperating partner has its own data definition tools and language, as well as its own concept of required information, the integrated data for a product must be separable in many ways. These can be combined, for example, into the parts list, where the products for the mechanical and electrical engineer are analyzed and enriched with additional information stored in databases, as well as cost lists from PDM systems.

The user's view of the data is the same as if he or she were working alone. Comprehensive and convenient data storage with metadata management is essential: different tools are used, and integration with manual redefinition of data and connections is not feasible. Most of the work needs to be done automatically, and all data must be storable, with its metadata managed. The information stored in the new data model should influence design decisions; therefore, dependencies and pipelines have to be defined and used. Furthermore, the dependencies between pieces of information must be kept in a consistent state. An electrical engineer designing a connection between two elements that are not connected in the mechanical model is inconsistent, and engineers cannot ignore the constraints given by the design concept. One of the most complicated tasks is the integration of different users working on the same model, each with their own view, storing and updating the model at the same time; these changes must be integrated persistently. Especially in heterogeneous production networks consisting of many companies, the possibility of protecting one's information from access is necessary. Data security concerns arise in many forms, even for an individual designer who wants to work without risk of changes in his or her area. This means that an inconsistent/incompatible-data warning, rather than an automatic change routine, is an essential requirement.

An adaptive schema in VE
To build an adaptive schema, both parts and their integration schema should be considered. In our scenario, three general data models are used: a CAD data model, a data model for mechatronic systems, and an FE data model. CAD systems typically use a hierarchical, parameterizable, feature-based data model. A construction is organized into assemblies, subassemblies, and parts. The properties of assemblies and parts are called features and parameters. A characteristic feature of a part is its volume, i.e., its geometry; additional features are, for example, material or surface specifications. Assemblies group parts or other assemblies and assign positions to them as parameter values. The mechatronic data model consists of the mechanical model and the electrical model. All components have parameters, e.g., inertia, center of gravity, and mass. Components are classified into bodies and motors, and they are connected by ports; a port has a defined position on a component. The FE model is based on a mesh model of a possibly simplified version of the original geometry. This geometry can be derived from the CAD system, but it can also be a simplified abstract geometry, e.g., 1D or 2D. The mesh model consists of mesh entities that are distinguished into elements, faces, edges, and nodes. Parameters are assigned to elements to describe materials, motion constraints, and masses. Connections and hierarchies in the models are expressed differently and denote different real-world concepts: in the CAD model, for example, a hierarchy means a construction hierarchy, while in the mechanical model a connection corresponds to kinematic dependencies. Therefore, simple 1:1 correspondences between data model elements are only partially possible; often complex conversions are required that also take actual models and instances into account. Most ERP models are either customer or product based, but are merely fragmented collections of accessible data with no connections or cross-functionality.
The character of such data models therefore lies in their structure, and their information scope and data model are difficult to characterize. To overcome this, current research focuses on the integration of different ERP solutions in a DW, and also on the question of why previous research in this area is not used in current ERP systems. The integration of ERP systems in this paper assumes a DW in which the databases of different ERP solutions are integrated into a common data structure that is further developed and extended for the integration tasks.

Integration architecture and schema
The integration architecture can be divided into three basic components: the ERP data warehouse, the VE integrating DW, and the Multi database. The ERP data warehouse consists of a basic structure containing product-based information about products or parts of products. The UID (unit identification) represents the link to the different material databases or parts lists. Further information can be stored, for example about cost, as well as customer or delivery information. The information schema in the data warehouse is integrative, which means that different tools are tested for consistency and can change data that is already defined. An adjustable feedback feature propagates relevant data changes to each related system, or simply adds a new version with a warning for all inconsistent data variants. In addition, an adaptive view feature is included. The VE database is based on the solutions of the component-based virtual family definition in [9, 10]. Here, information closely related to technical descriptions is managed and kept consistent; feedback and consistency control are also built in. The Multi database integrates both approaches in one schema, as shown in Figure 4.

Figure 4: Database integration (basic idea)
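To illustrate the role of the UID as the link between the ERP data warehouse and the VE parts data, consider the following minimal Python sketch. The record layouts, field names, and values are illustrative assumptions, not the authors' schema; the point is only that a UID-keyed join yields the enriched parts list described above.

```python
# Minimal sketch of UID-based linking between an ERP data warehouse and a
# VE parts list. All record layouts and field names are illustrative
# assumptions; the paper does not prescribe a concrete schema.

# ERP data warehouse: product-based records keyed by UID (unit identification)
erp_records = {
    "UID-001": {"cost": 125.50, "supplier": "A", "delivery_days": 14},
    "UID-002": {"cost": 48.75,  "supplier": "B", "delivery_days": 7},
}

# VE parts list: technical descriptions keyed by the same UID
ve_parts = {
    "UID-001": {"name": "Body", "material": "steel", "mass_kg": 12.4},
    "UID-002": {"name": "Axis", "material": "steel", "mass_kg": 3.1},
}

def enriched_parts_list(erp, ve):
    """Join both stores on UID, producing the combined parts list that the
    integrated schema would expose to technical and economic users alike."""
    combined = {}
    for uid, part in ve.items():
        row = dict(part)              # technical attributes from the VE side
        row.update(erp.get(uid, {}))  # enrich with ERP attributes, if present
        combined[uid] = row
    return combined

for uid, row in enriched_parts_list(erp_records, ve_parts).items():
    print(uid, row)
```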

The Multi database is based on a component-oriented model that describes VPs in a multidisciplinary way. Figure 5 illustrates the main concepts of the model using UML notation. A domain model describes the behavior, form, or function of a VP in an engineering domain, for example the mechanical behavior model or the CAD model. A domain model is made up of submodels. Since ERP data models are combined and often not closely related to a basic model, they can be mapped directly to the component. Each model provides a set of parameters and ports. Parameters are quantities that describe the characteristics of the model and the characteristics of a VP in a given domain. A port is a connection point at which models can be combined; signals, material, and forces are transferred via connected ports. The libraries are integrated into the ERP data model, as is common ERP data with its specific constraints and processes. In summary, a model is represented by a 3-tuple M = (id, parameters, ports), while ERP models are represented by ERPM = (id, constraints, processes, parameters). Figure 6 illustrates our exemplary CAR component. The component contains two domain models, a CAD model and a combined ERP model. A set of dependencies describes the internal relationships between the domain models. The external interface offers ports and parameters that are mapped internally to the domain models. The component can be distributed and instantiated in a coupled fashion, allowing it to be used directly in CAD and ERP models.

Figure 5: UML schema. Figure 6: CAR construction component.

Each domain model and ERP model represents a view of a VP and does not support domain mappings. Therefore, hierarchically organized components are introduced. A component represents a conceptual part of a VP and encapsulates all the models, as well as their dependencies, for this artifact. A component ensures dependencies between different domains, while domain models combine submodels within a domain. Additionally, components can contain subcomponents. Components provide interfaces for communication and parameterization; an interface consists of parameters and ports. Constraints and assignments ensure consistency control within a component. Combining these concepts, a component is defined as a tuple C = (id, M, C, Pdept, PMdept, I), with an identifier id; M and C are the sets of models and encapsulated components, respectively; Pdept represents a set of port mappings between different domain models; and PMdept is a set of parameter dependencies. Constraints and processes can be translated into dependencies or parameters. Finally, the interface I, which consists of a set of ports and external parameters, describes the behavior of the component towards its environment. The interface is mapped to internal models and components. Views: information overload occurs if too many details are presented at the same time during the development process. Views filter the information relevant to each client, be it the engineer, the manager, or the initiator. Views can be integrated into the schema and processes as illustrated in Figure 7.
The arrows illustrate the combination and integration of data representations, as well as the influence of changed properties or parameters. Based on the (meta)data stored in files and folders, the integrated data is stored in a component-based structure. Inconsistent information is removed, and libraries as well as files are linked in the (meta)data repository. The global view contains all the parameters and ports in each file and can be accessed directly. The economic interface contains all the DW-relevant information about the ERP data and hides the technical information. The technical interface contains the technical virtual engineering data; commercial and economic information is not visible. A further defined view is the combined interface, which contains information about both disciplines, or information needed in both, as well as new views of the data. Views are defined and scheduled to be compatible with an adaptive view selection in which dependencies are defined. Therefore, any combination and any new views and approaches are possible; a minimal sketch of the component model and its views follows.
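The following Python sketch makes the component and view concepts concrete under stated assumptions: the class and attribute names are ours, chosen to mirror the tuples M = (id, parameters, ports) and C = (id, M, C, Pdept, PMdept, I) defined above, and the view filters are simplified to plain attribute selection rather than a full adaptive view mechanism.

```python
from dataclasses import dataclass

# Illustrative sketch of the component-oriented Multi-database model.
# The paper defines only the tuples M = (id, parameters, ports),
# ERPM = (id, constraints, processes, parameters), and
# C = (id, M, C, Pdept, PMdept, I); everything else here is assumed.

@dataclass
class Model:                 # domain model M
    id: str
    parameters: dict         # e.g. {"density": 7850, "volume": 0.002}
    ports: list              # e.g. ["frame-x", "frame-y"]

@dataclass
class ERPModel:              # ERP model ERPM
    id: str
    constraints: dict        # e.g. {"max_cost": 200.0}
    processes: list          # e.g. ["purchase", "assembly"]
    parameters: dict         # e.g. {"cost": 125.5}

@dataclass
class Component:             # component C
    id: str
    models: list             # encapsulated domain/ERP models (M)
    components: list         # subcomponents (C)
    port_mappings: dict      # Pdept: port links across domain models
    param_dependencies: dict # PMdept: parameter dependencies
    interface: dict          # I: externally visible ports and parameters

    def view(self, kind: str) -> dict:
        """Filter the component's parameters for a client-specific view:
        'technical' hides economic data, 'economic' hides technical data,
        'global' shows everything. The key classification is assumed."""
        economic_keys = {"cost", "quality"}
        result = {}
        for m in self.models:
            for key, value in m.parameters.items():
                is_econ = key in economic_keys
                if kind == "global" or (kind == "economic") == is_econ:
                    result[f"{m.id}.{key}"] = value
        return result

# Exemplary CAR component with one CAD model and one directly mapped ERP model
cad = Model("CAD", {"density": 7850, "volume": 0.002}, ["frame-x", "frame-y"])
erp = ERPModel("ERP", {"max_cost": 200.0}, ["purchase"], {"cost": 125.5})
car = Component("CAR",
                [cad, Model(erp.id, erp.parameters, [])],  # ERP mapped directly
                [], {}, {"mass": ("CAD.density", "CAD.volume")},
                {"ports": ["frame-x"], "parameters": ["mass", "cost"]})

print(car.view("technical"))  # {'CAD.density': 7850, 'CAD.volume': 0.002}
print(car.view("economic"))   # {'ERP.cost': 125.5}
```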

Figure 7: Views

4. CONCLUSION AND PERSPECTIVE
In this paper we present data-driven and integrated engineering of prototypes in the product development domain, illustrated with an automotive example. Building on the description of a virtual engineering integration solution, the integration of the ERP data warehouse into a common framework is the focus of this article. The new integrative architecture is based on the challenges in the field of domain expert integration. In particular, the view concept allows for different approaches to using and specifying the integrated data, as well as dependencies and other control options. The next steps in this field are the integration of different ERP systems into a DW and the implementation of the system. Further steps and areas of integration, such as logistics or training, are planned for the future.

ACKNOWLEDGMENTS
Stephan Vornholt's work is supported by the European Commission (European Regional Development Fund), project COMO C and C. Veit Köppen's work is funded by the German Ministry of Education and Science (BMBF), project 01IM08003C.

REFERENCES
[1] Fenves, S.J., Foufou, S., Bock, C., Sudarsan, R., Bouillon, N., Sriram, R.D., CPM 2: A Revised Core Product Model to Represent Design Information, Technical Report NISTIR 7185,
[2] Xue, D., Yang, H., A Concurrent Engineering-Oriented Design Database Representation Model, Computer-Aided Design, 36:
[3] Juhász, T., Schmucker, U., Automatic Model Conversion to Modelica for Dymola-Based Mechatronic Simulation, in: Bachmann, B. (ed.), Proceedings of Modelica 2008, The Modelica Association, Bielefeld, pp.
[4] Modelica Association, Modelica – A Unified Object-Oriented Language for Physical Systems Modeling – Language Specification – Version 3.0,
[5] Bettaieb, S., Noel, F., A Generic Architecture for Synchronising Design Models Issued from Heterogeneous Business Tools: Towards More Interoperability Between Design Expertises, in: Engineering with Computers,
[6] Armstrong, C.G., Modelling Requirements for Finite-Element Analysis, Computer-Aided Design 26(7):
[7] ISO, ISO 10303: Industrial Automation Systems and Integration: Product Data Representation and Exchange,
[8] Pavez, L., STEP Datenmodelle zur Simulation mechatronischer Systeme, Abschlussbericht des Verbundprojekts MechaSTEP, Forschungszentrum Karlsruhe GmbH,
[9] Vornholt, S., Geist, I., Flexible Integration Model for Virtual Prototyping Families, in: Proceedings of PLM08, 5th International Conference on Product Lifecycle Management, Seoul, Korea,
[10] Vornholt, S., Geist, I., Interface for Multidisciplinary Virtual Prototyping Components, in: 19th International Conference on Database and Expert Systems Applications (DEXA), 1st International Workshop on Data Management in Virtual Engineering (DMVE '08),
[11] Esteves, J., Pastor, J., Enterprise Resource Planning Systems Research: An Annotated Bibliography, in: Communications of the AIS 7(8),
[12] Silwood Technology Ltd, Saphir White Paper: Managing Metadata for Enterprise Applications, handbook,
[13] CA, IT Management Transformation, Technology Brief: CA ERwin Saphir Option, manual,
[14] Simon, Elke, Implementation of an Enterprise Resource Planning System with a Focus on End-User Training, Hamburg: Diplomica Verlag,

Fault Detection and Isolation for a Bus Suspension Model Using an Unknown Input Observer Design
Juan Anzurez Marín, Luis A. Torres, Graduate Studies Division of the Faculty of Engineering, C.P , México

ABSTRACT
This paper presents a fault diagnosis scheme based on the design of unknown input observers. The technique used is a model-based approach, whose main task is the observation of error signals known as residuals. These robust residuals are derived by comparing the output of the system with the estimated output. The residuals are generated by the unknown input observer (UIO) design, whose special feature is that its state estimation error vector tends to zero asymptotically, regardless of the presence of disturbances or unknown inputs in the system. In this work we use the quarter-suspension model of a standard bus to test the unknown input observer design applied to the problem of diagnosing sensor faults.
Keywords: Fault diagnosis, Unknown input observers, Sensor faults.

1. INTRODUCTION
Modern control systems are becoming more complex and control algorithms more sophisticated. Consequently, the issues of availability, reliability, and operational safety are of great importance. For safety-critical systems, the consequences of failures can be extremely severe in terms of human mortality and economic losses. There is therefore a growing need for monitoring and fault diagnosis to increase the reliability of such safety-critical systems, since early indication of developing faults can help prevent system failure, mission aborts, and disasters. Since the early 1970s, fault diagnosis research has gained increasing attention throughout the world, both in theory and in application, made feasible by advances in computer technology [1-3]. The fault diagnosis process basically consists of three levels. The first level is fault detection, which indicates when a fault has occurred in the system. The second level corresponds to fault isolation, in which the location of the fault is determined. The third and final level is fault identification, which estimates the size and type or nature of the fault. For this reason, the process is also known as fault detection and isolation (FDI) [8], [9], [10]. The model-based FDI approach requires a system model, which is an idealized assumption; in practice this assumption is not fully met, since the system parameters are usually uncertain or vary over time [6]. However, the unknown input observer (UIO) design solves the FDI problem effectively even in the presence of modeling uncertainties. This technique surpasses the classic approach of hardware redundancy through software redundancy, with an obvious cost-effectiveness benefit. The UIO design requires, in addition to the system model, measurable outputs. The residual is obtained by comparing the actual output of the system with the output estimated by the observers. The purpose of UIOs is to produce a state estimate that is asymptotically close to the actual state while rejecting the effects of noise and system modeling errors. In this approach, the disturbances must be decoupled from the generated residuals. This is achieved by assuming that the distribution matrix of the unknown inputs (disturbances) is known; based on this information, the disturbances can be decoupled.
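As a minimal illustration of the detection level of the FDI process described above (isolation and identification then build on the pattern of detected residuals), consider the following sketch; the threshold value, signals, and fault size are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch of residual-based fault detection: a fault is declared
# whenever the residual r(t) = y(t) - y_hat(t) exceeds a threshold.
# Threshold, signals, and fault size below are illustrative assumptions.

def detect_fault(y, y_hat, threshold=0.05):
    """Return a boolean array marking samples where the residual
    magnitude exceeds the detection threshold."""
    residual = y - y_hat
    return np.abs(residual) > threshold

t = np.linspace(0.0, 20.0, 2001)
y_hat = np.zeros_like(t)             # estimated output (fault-free model)
y = np.zeros_like(t)
y[t >= 10.0] += 0.2                  # additive sensor fault appearing at t = 10 s

flags = detect_fault(y, y_hat)
print("fault detected at t =", t[flags][0], "s")   # ~10.0 s
```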
In this work, to analyze the sensor fault detection problem, we propose to combine the advantages of the UIO proposed by Chen [1], which has the ability to decouple disturbances, with the advantages of dedicated observers to achieve adequate isolation, applied to a bus suspension, since it is a very important system in the automotive sector. The article is organized as follows. Section 2 presents some basic concepts of unknown input observers. Section 3 presents the bus suspension model used in this work as a case study. Section 4 describes the results obtained using the unknown input observers applied to a bus suspension system. Finally, in Section 5 we present some conclusions about the technique used for the fault diagnosis problem.

2. UNKNOWN INPUT OBSERVER DESIGN
In the class of systems considered here, the uncertainties can be summarized as an additive term in the dynamic state-space equation:

$\dot{x}(t) = Ax(t) + Bu(t) + Ed(t)$, $y(t) = Cx(t)$  (1)

where $x(t) \in \mathbb{R}^n$ is the state vector, $y(t) \in \mathbb{R}^m$ the output vector, $u(t) \in \mathbb{R}^r$ the known input vector, and $d(t) \in \mathbb{R}^q$ the unknown input or disturbance vector; $A$, $B$, $C$, and $E$ are known matrices of appropriate dimensions. The residual is then obtained as

$r(t) = y(t) - C\hat{x}(t)$  (2)

An observer is a UIO for the system (1) if its state estimation error vector $e(t)$ tends to zero asymptotically, regardless of the presence of the unknown input (disturbance) in the system. This article uses the full-order UIO scheme proposed by Chen and Patton [1]. The structure of the full-order observer is

$\dot{z}(t) = Fz(t) + TBu(t) + Ky(t)$, $\hat{x}(t) = z(t) + Hy(t)$  (3)

where $\hat{x}(t) \in \mathbb{R}^n$ is the estimated state vector, $z(t) \in \mathbb{R}^n$ is the state of the full-order observer, and $F$, $T$, $K$, and $H$ are matrices to be designed so as to achieve the unknown input decoupling and the other design requirements. The observer described by equation (3) is illustrated in Figure 1.

Figure 1: Structure of the unknown input observer

When the observer (3) is applied to the system (1), the estimation error (4) is governed by equation (5):

$e(t) = x(t) - \hat{x}(t)$  (4)

$\dot{e}(t) = Fe(t) + [(A - HCA - K_1C - K_2C) - F(I - HC)]x(t) + [(I - HC) - T]Bu(t) + (I - HC)Ed(t)$  (5)

where

$K = K_1 + K_2$  (6)

If the following relations hold,

$(HC - I)E = 0$, $T = I - HC$, $F = A - HCA - K_1C$, $K_2 = FH$  (7)

all terms on the right-hand side of (5) except the first vanish, and the state estimation error is governed by

$\dot{e}(t) = Fe(t)$  (8)

If all the eigenvalues of $F$ are stable, $e(t)$ tends to zero asymptotically, that is, $\hat{x} \to x$; this makes the observer (3) a UIO for the system (1). In order to meet the design requirements presented above and obtain the matrices that achieve the unknown input decoupling, the following algorithm is used (a numerical sketch of the main steps is given at the end of Section 3):

1. Check the rank condition for $E$ and $CE$: if $\operatorname{rank}(CE) \neq \operatorname{rank}(E)$, a UIO does not exist; go to step 10.
2. Compute $H$, $T$, and $A_1$: $H = E[(CE)^T(CE)]^{-1}(CE)^T$ (9), $T = I - HC$ (10), $A_1 = TA$ (11).
3. Check observability: if $(C, A_1)$ is observable, a UIO exists and $K_1$ can be computed using pole placement; go to step 9.
4. Construct a transformation matrix $T_o$ for the observable canonical decomposition: select $n_1 = \operatorname{rank}(W_O)$ independent row vectors $t_1^T, \ldots, t_{n_1}^T$ from $W_O$ (the observability matrix of $(C, A_1)$), together with $n - n_1$ further row vectors $t_{n_1+1}^T, \ldots, t_n^T$, to construct the non-singular matrix $T_o = [t_1, \ldots, t_{n_1}, t_{n_1+1}, \ldots, t_n]^T$ (12).
5. Perform an observable canonical decomposition on $(C, A_1)$: $T_o A_1 T_o^{-1} = \begin{bmatrix} A_{11} & 0 \\ A_{12} & A_{22} \end{bmatrix}$ (13), $CT_o^{-1} = [C_1 \;\; 0]$ (14).
6. Check the detectability of $(C, A_1)$: if any eigenvalue of $A_{22}$ is unstable, no UIO exists; go to step 10.
7. Select $n_1$ desirable eigenvalues and assign them to $A_{11} - K_1^p C_1$ using pole placement.
8. Compute $K_1 = T_o^{-1}[(K_1^p)^T \;\; (K_2^p)^T]^T$, where $K_2^p$ can be any $(n - n_1) \times m$ matrix.
9. Compute $F$ and $K$: $F = A_1 - K_1 C$ (15), $K = K_1 + K_2 = K_1 + FH$ (16).
10. Stop.

To detect a particular fault, an isolation scheme must be used; in this article a dedicated observer scheme is employed. A system may present faults in both

actuators and sensors; in this work, however, only the problem of detecting sensor faults is addressed. The isolation scheme for this purpose assumes that all actuators are fault-free, so the system equations can be written as

$\dot{x}(t) = Ax(t) + Bu(t) + Ed(t)$
$y^j(t) = C^j x(t) + f_s^j(t)$
$y_j(t) = c_j x(t) + f_{sj}(t)$, for $j = 1, 2, \ldots, m$  (17)

where $c_j \in \mathbb{R}^{1 \times n}$ is the $j$-th row of the matrix $C$, $C^j \in \mathbb{R}^{(m-1) \times n}$ is derived from $C$ by removing the $j$-th row $c_j$, $y_j(t)$ is the $j$-th component of $y(t)$, $y^j(t) \in \mathbb{R}^{m-1}$ is derived from the vector $y(t)$ by removing its $j$-th component, and $f_s$ represents the fault in sensor $j$. Consequently, the residual generator based on $m$ UIOs can be constructed as

$\dot{z}^j(t) = F^j z^j(t) + T^j B u(t) + K^j y^j(t)$
$r^j(t) = (I - C^j H^j)y^j(t) - C^j z^j(t)$, for $j = 1, 2, \ldots, m$  (18)

where the matrices must satisfy the following equations:

$H^j C^j E = E$, $T^j = I - H^j C^j$, $F^j = T^j A - K_1^j C^j$, $K_2^j = F^j H^j$, $K^j = K_1^j + K_2^j$, for $j = 1, 2, \ldots, m$  (19)

Each residual generator is driven by all the inputs and all but one of the outputs; this is shown in Figure 2. When all actuators are fault-free and a fault occurs in the $j$-th sensor, the residuals satisfy the following isolation logic:

$\|r^j(t)\| < T_{SFI}^j$, for $j = 1, 2, \ldots, m$  (20)

where the $T_{SFI}^j$ are the isolation thresholds.

3. BUS SUSPENSION MODEL
The simulation presented in this paper is based on the dynamics of a quarter-bus suspension model. The diagram of the model is shown in Figure 3, and it is described by the following equations [4]:

$M_1 \ddot{x}_1 = -b_1(\dot{x}_1 - \dot{x}_2) - k_1(x_1 - x_2)$
$M_2 \ddot{x}_2 = b_1(\dot{x}_1 - \dot{x}_2) + k_1(x_1 - x_2) + b_2(\dot{W} - \dot{x}_2) + k_2(W - x_2)$  (21)

where $M_1$ and $M_2$ are the body mass and the suspension mass, respectively, $k_1$ is the spring constant of the suspension system, $k_2$ is the spring constant of the wheel and tire, $b_1$ is the damping constant of the suspension system, $b_2$ is the damping constant of the wheel and tire, $x_1$ and $x_2$ are displacements, and $W$ is the reference for any road disturbance.

Figure 3: Quarter-bus suspension model

For the UIO design procedure, the state-space model of the plant (the bus suspension) is required. Following the quarter-bus model of [4], with the road profile $W$ treated as the unknown input $d(t)$, the system matrix takes the form

$A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ -\frac{b_1 b_2}{M_1 M_2} & 0 & \frac{b_1}{M_1}\left(\frac{b_1}{M_1} + \frac{b_1}{M_2} + \frac{b_2}{M_2}\right) - \frac{k_1}{M_1} & -\frac{b_1}{M_1} \\ \frac{b_2}{M_2} & 0 & -\left(\frac{b_1}{M_1} + \frac{b_1}{M_2} + \frac{b_2}{M_2}\right) & 1 \\ \frac{k_2}{M_2} & 0 & -\left(\frac{k_1}{M_1} + \frac{k_1}{M_2} + \frac{k_2}{M_2}\right) & 0 \end{bmatrix}$

with $B$ the control input distribution column (entries $1/M_1$ and $1/M_1 + 1/M_2$ in its second and fourth rows), $E = \begin{bmatrix} 0 & \frac{b_1 b_2}{M_1 M_2} & -\frac{b_2}{M_2} & -\frac{k_2}{M_2} \end{bmatrix}^T$ the disturbance distribution column associated with $W$, and $C$ the output matrix selecting the measured states. We begin the process of designing the residual generators, whose results are shown in the next section.

Figure 2: Sensor fault isolation scheme
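As a concrete illustration of steps 1, 2, and 9 of the design algorithm in Section 2, the following Python sketch computes the observer matrices for the quarter-bus parameters of the next section. The output matrix (taken here as the 4×4 identity, i.e., all states measured) and the chosen observer poles are our assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import place_poles

# Quarter-bus suspension parameters (Section 4 of the paper)
M1, M2 = 2500.0, 320.0            # body / suspension mass [kg]
k1, k2 = 80_000.0, 500_000.0      # suspension / tire spring constants [N/m]
b1, b2 = 350.0, 15_020.0          # suspension / tire damping constants [N s/m]

s = b1 / M1 + b1 / M2 + b2 / M2
A = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [-b1 * b2 / (M1 * M2), 0.0, (b1 / M1) * s - k1 / M1, -b1 / M1],
    [b2 / M2, 0.0, -s, 1.0],
    [k2 / M2, 0.0, -(k1 / M1 + k1 / M2 + k2 / M2), 0.0],
])
E = np.array([[0.0], [b1 * b2 / (M1 * M2)], [-b2 / M2], [-k2 / M2]])
C = np.eye(4)                     # assumption: all four states measured

# Step 1: rank condition rank(CE) == rank(E)
assert np.linalg.matrix_rank(C @ E) == np.linalg.matrix_rank(E)

# Step 2: H = E[(CE)^T (CE)]^{-1} (CE)^T, T = I - HC, A1 = TA  (eqs. 9-11)
CE = C @ E
H = E @ np.linalg.inv(CE.T @ CE) @ CE.T
T = np.eye(4) - H @ C
A1 = T @ A

# Steps 3 and 9: (C, A1) is observable here, so place stable poles for
# F = A1 - K1 C via the dual problem, then K = K1 + F H        (eqs. 15-16)
poles = [-2.0, -3.0, -4.0, -5.0]  # assumed design poles
K1 = place_poles(A1.T, C.T, poles).gain_matrix.T
F = A1 - K1 @ C
K = K1 + F @ H

print("eigenvalues of F:", np.linalg.eigvals(F))  # should match the poles
```

With C taken as the identity, the decoupling condition $(HC - I)E = 0$ holds exactly, since $H$ is then the orthogonal projector onto the column space of $E$.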

4. RESULTS
For the UIO design, the following magnitudes were substituted into the state-space matrices of the bus suspension model; these values represent a typical bus suspension model.

Body mass (M1) = 2500 kg
Suspension mass (M2) = 320 kg
Suspension spring constant (k1) = 80,000 N/m
Tire and wheel spring constant (k2) = 500,000 N/m
Suspension damping constant (b1) = 350 N s/m
Wheel and tire damping constant (b2) = 15,020 N s/m

Applying the design algorithm presented in Section 2: in the first step, the rank test, rank(CE) = rank(E), so we proceed to step 2 and calculate the matrices H and T using (9) and (10). Next, we calculate the matrices F and K using (15) and (16). Substituting the matrices H, T, F, and K into (3), we build a robust UIO that generates the residuals used to detect faults in the sensors of the bus suspension system. The simulation results are presented in Figure 4: first a fault-free system is shown, then a fault in state 2, and finally a fault in state 3. Twenty seconds of simulation are presented; the faults were introduced as one-second step inputs. However, fault isolation is not possible with this approach: as shown in Figure 4, the residuals in graphs 2 and 3 are both active, and the difference cannot be appreciated.

Figure 4: Robust UIO residuals

To achieve fault isolation, the approach described in equations (18), that is, a dedicated observer approach, was applied. It is required to satisfy the conditions in (19), which implies a separation of the matrices in step 2 of the design algorithm. Note that the matrices A, B, and E are not modified; however, a new submatrix C^j is obtained for each designed UIO by deleting the respective row of the original matrix C. The choice of deleted row does not influence the result of the fault isolation; it is important to note, however, that all states admit designable UIOs. As a result of step 2 of the design algorithm, we obtain the values of the H^j submatrices and, using (19), the values of the T^j, F^j, and K^j submatrices.

Substituting these values into (18), we design an observer and a residual generator for each UIO. The fault diagnosis algorithm in this paper was tested by simulating faults in the sensors for states 2 and 3 ($\dot{x}_1$ and $y_1 = x_1 - x_2$), i.e., the body motion and the deflection of the bus damper. In a real situation, $\dot{x}_1$ would be easily accessible by integrating the output of a bus-mounted accelerometer. Figure 5 shows the residuals generated for a fault-free system: after the stabilization time, the UIO residuals remain at zero, indicating that no system fault has occurred.

Figure 5: Residuals without fault

Figure 6 shows the residuals for a fault occurring in state 2. We can see that the residual of UIO 3 is activated; that is, it does not satisfy condition (20).

Figure 6: Residuals for a fault in sensor 2

Figure 7 shows the residuals for a fault occurring in state 3; here the residuals of UIO 2 are activated.

Figure 7: Residuals for a fault in sensor 3

5. CONCLUSIONS
This paper addresses fault detection and isolation through an unknown-input-observer fault diagnosis scheme applied to a quarter-bus suspension model; the robust fault diagnosis algorithm was successfully tested in simulation. According to UIO theory with the modified isolation scheme, the residual generators are driven by all outputs except the one corresponding to the sensor they monitor; thus, for a fault in a specific state, all UIO residuals except its own will be activated. It can be seen in the simulations that, when a fault occurs in state 2, UIO 3 is activated, and vice versa; this corresponds to the expected behavior of an unknown input observer. From the results obtained, it can be concluded that the UIO design is a good tool for solving the fault diagnosis problem for systems with a known mathematical model. It is also important to note that the presented design method can be applied to systems that are not fully observable, by checking the observable part using a transformation matrix to obtain an observable canonical decomposition.

6. REFERENCES
[1] Jie Chen and Ron J. Patton, Robust Model-Based Fault Diagnosis for Dynamic Systems, Kluwer Academic Publishers, Boston,
[2] J.A. Marin, N. Pitalúa-Díaz, O. Cuevas-Silva and J. Villar-García, Design of Unknown Input Observers for Fault Detection in a Two-Tank Hydraulic System, 2008 Electronics, Robotics and Automotive Mechanics Conference (CERMA), Mexico, pp.,

[3] R.J. Patton, P.M. Frank, and R.N. Clark, Issues of Fault Diagnosis for Dynamic Systems, Springer,
[4] Control Tutorials for MATLAB and Simulink, Bus Suspension.
[5] Jason Ridenour and Stanislaw H. Zak, Observer-Based Fault Detection and Isolation, white paper, Advancing Technology Through Collaboration,
[6] S. Nowakowski, M. Darouach, and P. Pierrot, A New Approach to Fault Diagnosis of Uncertain Systems, Proceedings of the 32nd Conference on Decision and Control, San Antonio, Texas, pp.,
[7] C.S. Kallesoe, V. Cocquempot, and R. Izadi-Zamanabadi, Model-Based Fault Detection in a Centrifugal Pump Application, IEEE Transactions on Control Systems Technology, Vol. 14, No. 2, pp.,
[8] J. Anzurez M., "Diagnosis of Faults in Nonlinear Systems Using Fuzzy Logic and Sliding-Mode Observers", doctoral thesis in electrical engineering, CINVESTAV Guadalajara Unit, Mexico,
[9] R.J. Patton, P.M. Frank, and R.N. Clark, Fault Diagnosis in Dynamic Systems: Theory and Applications, Prentice Hall,
[10] A. Akhenak, M. Chadli, J. Ragot, and D. Maquin, Conference on Decision and Control, Atlantis, Paradise Island, Bahamas,

BIOGRAPHY
Juan Anzurez Marín received the B.E. degree in Electrical Engineering from the Michoacana University of San Nicolás de Hidalgo (UMSNH), Mexico, in 1991; the M.Sc. degree in Electronic Engineering from the Technological Institute of Chihuahua, Mexico, in 1997; and the Ph.D. degree in Electrical Engineering, Automatic Control option, from the Center for Advanced Studies and Research (CINVESTAV) of the IPN, Guadalajara Campus, Mexico. His research interests include instrumentation and control systems, as well as fault diagnosis algorithm design for nonlinear systems and energy harvesting applications. He is a member of the IEEE.
I.I. Lázaro is from Córdoba, Mexico. He holds B.S.E.E. and M.S.E.E. degrees from the Michoacana University Faculty of Electrical Engineering, where he is currently a professor. His research interests are electrical power quality and power electronics. He is a member of the IEEE.
Luis Alberto Torres Salomón holds a B.S. degree in Electronic Engineering from the Michoacana University of San Nicolás de Hidalgo (UMSNH), Mexico. His research interests include control systems, fault diagnosis, microcontroller applications, and mechatronics. He is a member of the IEEE.

Engineering Technology and Gender: Improving Voice and Access for Minority Groups by Curriculum Design for Distance Learning
Kaninika BHATNAGAR, School of Technology, Eastern Illinois University, Charleston, IL, USA

ABSTRACT
The instructional framework for distance education has traditionally focused on teaching with technology. However, the notion of technology as a teaching tool can fail to take into account the characteristics of the learner and the concept of learning with technology. This paper proposes a learning model that focuses on learner criteria, namely access, voice, and expectation, alongside the online learning triad of content, technology, and pedagogy. The Gender and Technology elective seminar course has been selected as a test case because of its intensive, discussion-oriented engagement structure and the higher likelihood of a diverse student body.
Keywords: Engineering technology, Curriculum design, Gender, Minority groups, Distance education.

1. LITERATURE REVIEW
In general, distance learning students are reported to have different characteristics and requirements from traditional students, and the virtual classroom is found to differ significantly from a traditional classroom [1]. Online instruction can potentially alter classroom power dynamics, allowing the hesitant, the introverted, those less comfortable speaking English, and shy students to participate without the need to speak up in class. Masters & Oberprieler [2] reported student participation as one of the main benefits of online education. They also pointed to the need to guarantee access and to ensure that no group of students unduly dominates the virtual educational landscape. The change in classroom power dynamics affects the promotion of diversity in general and of student participation in particular. The facilitating nature of online courses, which promote the participation of diverse groups, is a common theme in the distance education literature [2, 3, 4]. However, the literature has tended to focus more on online teaching than on online learning. It has been suggested that good online teaching generally stems from the constant interplay of three components, namely content, pedagogy, and technology [5]. Koehler et al. (2004) discuss this triad in the context of the role of technology in online teaching. They define content as the set of core ideas, knowledge, procedures, resources, and so on. Technology, according to this model, consists of presentation techniques, be it chalk and blackboard or the Internet and digital video; the notion of technology is based on interaction and presentation. Pedagogy, according to Koehler et al. (2004), forms the third leg of the triad and encompasses the process, practice, and methods of teaching, including methods of instruction and assessment.

2. TECHNOLOGY AND PEDAGOGY
This paper advocates the inclusion of three additional criteria to complement the aforementioned didactic triad. It is suggested that although the model in its current form fully informs teaching with technology, it does not capture the problems inherent in learning with technology. In the context of the Gender and Technology seminar course, a set of learning criteria, namely voice, access, and expectation, are identified and discussed in depth. Voice represents the degree of a student's participation in responding to and collaborating with classmates in a virtual classroom.
The literature suggests that female participation tends to increase in online instructional situations [6]. Access represents the ability to use technology, a factor that can often function as a gatekeeper to online education. It has been widely reported that minority groups continue to suffer most from lack of access to technology, such as Internet access [7]. Student participation is meaningless without access; therefore, although the virtual classroom can serve to strengthen the voice of participants, access to technology can still be problematic. Expectation is the third leg of the learning triad and forms the motivational backbone of this model. Expectation comprises a large part of a student's response to, and experience in, any course. Expectancy theory and theories of motivation have reported significant gender differences in the expectation of success, leading to differential participation and outcomes [8, 9, 10]. The three criteria identified in this paper all address the learner. It is important to note that simply improving instruction with technology fails to address the whole picture. Teaching with technology is a function of the teacher doing the job in the most effective way possible, providing comprehensive content aided by technology and supported by insightful pedagogical models. However, such instruction is still presented from the perspective of the sender, not the receiver; even the strongest teaching practice is not effective unless it is fully communicated to the learner. This is where the notion of a learning triad comes into play. Figure 1 illustrates the modified instructional framework after incorporating factors for the learner's experience into the instructional triad. The first learner criterion, voice or participation, determines who speaks and who does not. In a virtual classroom, online posting replaces speaking, and traditional power dynamics can change. The skill set required to sit down at a computer, think, and rethink an asynchronous post is quite different from the skills required to engage a teacher in a classroom and articulate a meaningful synchronous response. The experience can be intimidating for minority groups, and such

The second criterion, access, is problematic in that access to technology may in fact work to the detriment of minority groups. Although it has improved greatly over the last decade, access to technology for minority groups remains relatively low [7]. It must nevertheless be recognized as a critical factor when it comes to learning with technology, and recognizing potential problems with access can lead to creative rethinking. For example, all course reading material may be made available through online course reserves. Offering the use of lab machines to students who may not have a computer at home can improve access. The first virtual class discussion can be used to ensure that all students can use a computer with Internet access for at least a couple of hours two to three times a week. The instructor can proactively help by gathering resources and forming study groups to ensure everyone can get online. The third learning criterion, expectation, is a behavioral trait. According to Vroom [10], an individual's motivation is governed by their expectation of achieving a specific goal and the value that the individual places on that goal. In the context of teaching and learning, significant differences in expectation values are reported for various groups [9]. The level of expectation is an important factor in determining a student's motivation, and in a virtual classroom motivation becomes critical. Even with excellent technology, pedagogy, and content, the expectation of success, and the value the student places on that success, is an important factor in the learning process. The inclusion of expectation in the learner triad is an acknowledgment of motivational factors in the learning process. Although it is ultimately a characteristic of the learner, including the notion of expectation can make the instructor more aware of individual differences and better able to adapt the online conversation accordingly.
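Vroom's expectancy model is commonly summarized as a product of expectancy and valence, summed over outcomes; the rendering below uses our own notation, not anything given in the paper:

    \[ F = \sum_{i} E_i \, V_i \]

where \(F\) is the motivational force to act, \(E_i\) is the perceived probability (expectancy) that effort leads to outcome \(i\), and \(V_i\) is the valence, the value the individual places on that outcome. The reported gender differences in expectation of success [8, 9] can be read as differences in \(E_i\) for the same instructional input.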
3. GENDER AND TECHNOLOGY: COURSE STRUCTURE
The topic of gender and technology is addressed in numerous courses across the country in Women's Studies programs and Technology departments. The course design described here refers specifically to the WS4112 Gender and Technology senior seminar currently offered by the Eastern Illinois University (EIU) School of Technology, Charleston. In its current form, the course is delivered in the traditional classroom lecture format with a modest online component. This paper proposes to modify the course for fully online delivery. It is suggested that in the process the course will lose none of its content and analytical rigor, but will gain a degree of flexibility that gives minority groups a greater voice. The course is designed to engage students in an ongoing conversation about the various intersections of gender with technology. Students are exposed to academic literature on gender and education, with a focus on technology education. The course is a dialogue on the relationship between women and technology: it builds on the historical context, navigates the current debate, and formulates a future forecast. Key issues that have affected women's mobility into technical occupational fields are discussed. Despite the tremendous advances of the last twenty years, the gender gap in technology unfortunately persists [11]. Class discussion can focus on a number of arguments. These may include, but are not limited to, gender differences in achievement outcomes, differential choices of college major, differences in persistence in Science and Engineering (S&E) majors, and differences in hiring into science and technology careers among male and female scientists, engineers, and technologists. Gender differences persist to this day in wages and advancement [12], as well as in differential attrition rates over a career [13]. Existing disparities will be discussed in light of deeply held belief systems and biases in both genders, which have percolated at a systemic level into our educational institutions. The course will be guided by an underlying theme of the social construction of gender, where a constructivist argument will inform discussion and praxis. The context for discussing women and technology education will be established through introductory readings on the broader context of women and education. Reading material will include gender-related motivational theories [8, 9],

historical background necessary to inform the current debate, and a future perspective that turns to the debate on the so-called boy crisis [14]. Currently, the course is based on face-to-face seminars and student presentations driving vigorous discussion. Porting the entire instructional experience online requires responsive course design to retain the robust discussion component of face-to-face instruction. The following sections describe the pedagogical structure of the modified course. Course objectives are adapted from the existing course WS 4112G, School of Technology, and the Women's Studies Minor Student Learning Objectives at EIU, Charleston.

Course Objectives
Upon completion of the Women and Technology senior seminar, students will be able to:
- Articulate women's contributions to technology, science, and engineering
- Identify personal assumptions about issues that affect women, men, and technology
- Articulate the influence of technology in altering the socioeconomic landscape that has shaped gender role expectations
- Critically analyze the context in which the personal assumptions that inform and/or cause harmful behaviors and actions are formed
- Develop the ability to make informed and responsible individual and social judgments in the context of social responsibility
- Explore and imagine alternatives to current ways of thinking and acting on the problems that affect women and men in a technological world
- Demonstrate the ability to research and communicate with clarity and insight on gender issues in the context of technology

Instruction and Assessment
The course in its current form will be modified to fully accommodate online instruction. The various activities, reading material, and assessment exercises for the virtual classroom will be presented in a weekly format to let students become familiar with the routine of the course. Lectures will be posted as PowerPoint and/or video files each week, and students will be required to answer specific questions and talking points from these lectures. Each student will get a unique set of questions; beyond serving academic honesty, individualized questions can be used to enhance voice, opinion, and debate. The online format will allow periodic posting of grade points so that students can track their performance on an ongoing basis. A list of tentative topics for this course is presented in Table 1. The textbook can be, at best, a limitation and, at worst, an obstacle in an ever-evolving landscape of academic debate; reading material will instead be drawn from journals, book chapters, and/or electronic resources as determined by the instructor. The notion of the social construction of gender will run through the course modules as an underlying theme. Table 2 illustrates the design of the assessment activities based on the types of classroom interaction and the learning objectives; the learning objectives are linked to learning goals according to Bloom's Taxonomy [15]. Each activity is described in the following sections.

Table 1: Topics to explore during the seminar class
Identity: Ways in which gender intersects with other aspects of identity such as race, ethnicity, class, age, etc.
Education: Historical context, case studies, cultural and subcultural contexts of women's lives
Careers: Role of choice, opportunity, influences and expectations (family, peers, society, media), recognition, credibility, praise
Motivation and behavior: Theories of attribution and expectation, role of self-esteem, gender role expectations
Contributions: Contributions of women to the disciplinary matrix of science, mathematics, engineering and technology
Stereotypes: Role of the media, advertisements, image construction, beauty versus brains
Language: Obfuscation, masculinity in high-profile technology literature
Body Issues: Reproductive technologies: context, narrative, role, body image

Table 2: Learning Interactions, Objectives, and Assessment Activities
Interaction Type | Learning Objective | Bloom's Taxonomy Goal | Assessment Activity
Personal | Reflection | Understanding/Analysis | Introduction/Goodbye; Online Journal
Peer | Discussion, Analysis | Comprehension/Analysis | Chat/Discussion Posts
Instructor to Student | Critique, Discussion | Synthesis/Evaluation | Online Journal; Chat/Discussion Posts
Instructor to Group/Class | Critique, Discussion | Synthesis/Evaluation | Group Essay
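Since each student is to receive a unique set of lecture questions (see Instruction and Assessment above), the assignment could be automated from a shared question bank. The following Python sketch is a hypothetical illustration, not part of the course design; the bank, set size, and seed are all assumptions:

    import random

    def assign_question_sets(student_ids, question_bank, per_student=3, seed=2010):
        # Deal each student a disjoint set of questions from a shared bank.
        # A fixed seed keeps the deal reproducible across sessions.
        if len(question_bank) < per_student * len(student_ids):
            raise ValueError("question bank too small for disjoint sets")
        shuffled = list(question_bank)
        random.Random(seed).shuffle(shuffled)
        return {sid: shuffled[i * per_student:(i + 1) * per_student]
                for i, sid in enumerate(student_ids)}

Disjoint sets guarantee that no two students answer identical questions, which serves the academic-honesty goal while still drawing on a common pool.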

Activity 1 - Introductions/Goodbyes: Each incoming student will be required to post a short introduction, along with a critical commentary on ideas from a given set of problems and/or topics. Students could choose among words such as stereotype, gender, minority, role, identity, and language, essentially finishing the sentence "I think the word ___ means ___". At the end of the course, each student must post a brief response to their own post from the beginning of the term. They may revise it, modify it, completely alter it, or declare that their views remain unchanged; in each case they must explain why, or why not. The exercise is designed to let them reflect on and synthesize their learning experience during the course.

Activity 2 - Online Journal: The online journal is one of the current assessment activities and is proposed to continue in the virtual classroom. Over the course of the term, each student will prepare three reading-response assignments, critically reflecting on a self-selected reading or set of readings. Each journal entry (2-3 pages, double spaced) will be posted to the course site via WebCT. Journals should represent students' views of the readings and their critique of the author(s). Students will be encouraged to use multiple readings for this task so that they can compare and contrast the authors' arguments and formulate their own position in the academic debate.

Activity 3 - Chat/Discussion Posts: Online chat or discussion sessions are designed as an integral part of the virtual classroom. Students will be required to post comments and questions, as well as responses to a question posted by at least one other classmate. Their questions and comments will be based on the set of readings for the particular week. Students must use at least one external reading reference in their post; additional credit will be given for more than one external reference used in a meaningful way to support the argument.

Activity 4 - Group Essays: Students will form online groups of 3 or 4 and develop an essay topic based on their analysis of the course material. The end product of this project will be a final paper prepared as a group activity. Students must complete this activity in three stages. The first stage consists of formulating a topic of interest through internal online group discussions; the choice of topic is open, but it must relate to the course readings. In the second stage, each group works with the instructor and presents an outline of the proposed essay. The outline will consist of at least four components, each presented by a team member:
- Introduction of the topic and contextualization of the problem
- Presentation of existing academic work
- Presentation of the team's argument/response/point of view/position
- Synthesis of a conclusion
The instructor will provide feedback and critique at this stage of the project. The discussion session will be scheduled as half-hour synchronous meetings held online with each team. The third and final stage consists of the drafting process, where each of the four sections is addressed in detail. Special attention will be paid to the use of references in presenting external material. The essay (10-15 pages excluding references, double spaced, APA format) will be posted online as the final deliverable of this activity.
Learning with Technology
Each assessment activity described above is adapted to work with two of the three learning criteria described earlier. The structure of the course is designed to encourage interaction and strengthen voice. Online journals, discussion posts, introductions/goodbyes, and group essays are all means of allowing diverse opinions, viewpoints, and debates to emerge in a supervised and safe collegial environment. The voice factor is expected not only to improve for women and minorities, but to yield an overall improvement in participation for all. Access forms the second leg of the learning triad. It is true that access is not a function of course structure or pedagogy; it is an external factor that the instructor cannot control. However, acknowledging it as an important player can help the instructor better understand some of the challenges students may face in their online behavior. Expectation forms the third leg of the learning triad. The expectation of success is driven by motivation and is not a function of pedagogy; but, as noted above, recognizing this variable is a necessary step for the instructor to communicate successfully with a diverse student body. An understanding of expectation behavior can lead to more constructive instructor feedback on individual posts, discussions, and comments.

4. CONCLUSION
In this paper, a virtual course design for the Women and Technology seminar has been presented. A variety of assessment activities are presented to enable robust debate and discussion in an online format. It is suggested that the technology-based teaching triad of content, technology, and pedagogy should be informed by the learning triad consisting of voice, access, and expectation. Future research can empirically test the relative influence of each factor in this model. It is important to note that by considering only the teaching triad, we may be missing a large number of conditions that can hinder a learner. The teaching triad can be applied uniformly to all students, but the results may be far from uniform; the reasons for the discrepancies are likely to be a function of the learner's set of criteria, which in turn determines the success or failure of the course. This paper proposes adapting the learning criteria within the framework of a senior seminar course on gender and technology. Future research may focus on unraveling the learner triad and looking for possible additions and/or modifications within the learner dynamic. It is critical to recognize that the various characteristics of learners will interact variably with instructional input, and course design must take this variability into account.

5. REFERENCES
[1] Rovai, A. P., In search of higher persistence rates in distance education online programs, Elsevier Science, School of Education, Regent University, Virginia Beach, VA.
[2] Masters, K. and Oberprieler, G., Encouraging equitable online participation through curriculum articulation, Elsevier Ltd.
[3] Price, L., Gender differences and similarities in online courses: challenging stereotypical views of women, Journal of Computer Assisted Learning, Vol. 22, No. 5, 2006.
[4] Sullivan, P., "It's easier to be yourself when you're invisible": College students talk about their experiences in the online classroom, Innovative Higher Education, Vol. 27, No. 2, December 2002.
[5] Koehler, M. J., Mishra, P., Hershey, K., and Peruski, L., With a little help from your students: A new model for faculty development and online course design, Journal of Technology and Teacher Education, 12(1), 2004.
[6] Caspi, A., Chajut, E., and Saporta, K., Participation in class and in online discussions: Gender differences, Elsevier Ltd.
[7] Pachon, H. P., Macias, E. E., and Bagasao, P. Y., Minority access to information technology: Lessons learned, Michigan State University, October.
[8] Eccles, J. S. and Wigfield, A., Motivational beliefs, values, and goals, Annual Review of Psychology, 53, 2002.
[9] Eccles, J. S., Adler, T. F., Futterman, R., Goff, S. B., Kaczala, C. M., Meece, J. L., and Midgley, C., Expectancies, values, and academic behaviors, in J. T. Spence (Ed.), Achievement and Achievement Motives, San Francisco, CA: W. H. Freeman.
[10] Vroom, V., Work and Motivation, New York, NY: Wiley.
[11] Beizer, D., Technology access equity and digital divides, Technopolity, accessed May 9, 2010.
[12] Lips, H. M., The gender wage gap: Debunking the rationalizations, Women's Media, accessed April 5, 2010.
[13] Xie, Y. and Shauman, K. A., Women in Science: Career Processes and Outcomes, Cambridge, MA: Harvard University Press.
[14] Kohn, D., The gender gap: Boys lagging, CBS News, 60 Minutes, 2003, accessed May 11, 2010.
[15] Bloom, B. S., Taxonomy of Educational Objectives, Allyn and Bacon, Boston, MA.

New Concepts in Engineering Education through E-Learning

Vlasta RABE, Faculty of Education, University of Hradec Kralove, Hradec Kralove, Czech Republic
and
Stepan HUBALOVSKY, Faculty of Education, University of Hradec Kralove, Hradec Kralove, Czech Republic

ABSTRACT
This article deals with new concepts in engineering education with respect to global trends in the economic and information environment and their influence on engineering education, and with the possibilities of ICT implementation for integrating face-to-face and online learning. The systemic approach in education and its impact on the new possibilities of e-learning are presented.

Keywords: Engineering education, role of the teacher, role of experiments, e-learning, collaborative learning, quality of education.

1. INTRODUCTION
In the traditional approach to teaching, not only in engineering, the professor gives lectures and assigns readings and convergent problems of a single discipline; the students listen, take notes, and solve the problems individually. Today the demand on university education is shifting to a new wave of mentoring: the ability to work in a team (for example, in project-based teaching), to cope with change, to be flexible, and to innovate. In addition, it is necessary to focus on scientific work and its quality by providing information and knowledge; what is then measured is what students know and how they can use it in practice. Modern information technologies enhance the development of new methods of searching, acquiring, organizing, processing, exchanging, and using information from various sources, and its distribution to users. This article is not intended to offer a new method, technique, or tool for solving current problems; rather, it is an attempt to look to the future. Applications allow automatic access to information anytime, anywhere. Through modern ICT it is possible to share information and knowledge and to use them effectively. Access to computers and the Internet is no longer the issue; instead, a growing number of higher education leaders see the need to increase information literacy, that is, the ability to find relevant information according to one's needs.

2. CHANGING CONCEPTS IN ENGINEERING EDUCATION
The role of engineering education in the development of the information and knowledge society lies in an active approach to learning and in the possibility of using one's own practical experience in the education process. In the past, universities were expected above all to create and disseminate knowledge; now the demand on university education is based on the new wave of mentoring, the ability to work in a team (for example, in project-based teaching), to cope with change, to be flexible, and to innovate. These roles can be enhanced by promoting learning that ensures people can take advantage of the information resources available to them. Such efforts can and should include improving ease of access to information and educating people to evaluate and use information effectively.

Fig. 1. A holistic view of active learning (by John Wiley & Sons, Inc.)

The economic prosperity of universities will depend on the quality of their education and research and on their proactive behaviour. Universities are committed to lifelong education, because in the information society the processing, exchange, and presentation of information are part of everyday life. The focus of new concepts in the European educational system is shifting from learning to do (or to know) to learning to learn. The concept of lifelong learning also corresponds to the broad efforts and policy initiatives to develop ICT-related education in individual countries. Active learning is classroom instruction that engages students in activities other than watching and listening to a teacher: working individually or in groups, students can be asked to answer questions, solve problems, discuss, debate, reflect, brainstorm, or ask questions. Cooperative learning is instruction that engages students in team projects under conditions that meet several criteria, including positive interdependence (team members must trust each other to carry out their responsibilities) and individual accountability for each part of the project [7].

3. COLLABORATIVE LEARNING
Collaborative learning is based on teachers helping students respond to literature by taking a more active role in their own learning. The cooperative learning tradition tends to use quantitative methods that look at achievement, that is, at the product of learning. The collaborative tradition takes a more qualitative approach, analyzing student discourse in response to a piece of literature or a primary source in history. Cooperative learning is defined by a set of processes that help people interact to achieve a specific goal or develop a final product that usually has specific content. It is more directive than a collaborative system and is closely controlled by the teacher; while there are many mechanisms for group analysis and introspection, the fundamental approach is teacher-centered, whereas collaborative learning is more student-centered [5].

Fig. 3. Knowledge sharing through collaborative learning

4. ENGINEERING EDUCATION DISTINCTION

Fig. 2. Quality of education (by the M. Baldrige award)

In the last century, an increasing number of innovative programs, methods, and instructional materials appeared in engineering education. The changes that will move engineering education in the desired directions can be grouped into four categories [1]:
- revisions of the engineering curriculum and course structures
- implementation of alternative teaching methods and evaluation of their effectiveness
- establishment of instructional development programs for faculty members and graduate students
- adoption of measures to elevate the status of teaching in society and in institutional hiring, promotion, and reward policies
In addition, discussions of technical problems are important in engineering education. They are made possible by modern ICT and by the sharing and effective use of information and knowledge. Access to computers and the Internet is no longer a problem, but a growing number of leaders in

education see a greater need to increase information literacy: the ability to effectively identify, locate, and use information for lifelong learning.

Systems thinking
Related to this, systems thinking together with system dynamics offers education, first, a common framework for preserving cohesion, meaning, and motivation at all educational levels. The second element is the emphasis on the active cognition of the students, which embeds new challenges and interests for learning in the tutorial, as is common in experimental laboratories. These two innovations, taken together, help to enhance the creativity, curiosity, and vital energy of young people. It is necessary to learn systematically and to use systems thinking as a common tool for daily activities. A strong attribute of systems thinking is that it connects philosophy, politics, and culture with daily work.

Brainstorming: creating creative ideas
Brainstorming is a group technique for generating new and useful ideas and promoting creative thinking. It can be used to help define what project or problem to work on, to diagnose problems, to remedy a project by finding possible solutions, and to identify potential resistance to proposed solutions. I tested this method on a new subject with students in the distance study form (practising teachers): I gave them some material on the web and together they prepared lessons for the whole course, which we then checked in the tutorials. In my opinion, however, this approach is possible only with experienced people.

Benchmarking in higher education
Currently this method is used to measure the quality of education; examples can be found on the web. For some years now, statistical indicators on the information society have been central to the policy-making process, as best demonstrated by the eEurope Action Plan benchmarking exercise. Recognizing this need, and prompted by the difficulties in obtaining reliable and appropriate statistics, the IST programme supported a pan-European research effort during the Framework Programmes. The main objective has been to develop and make available methodologies, tools, and new statistical indicators that help alleviate the deficit in this field.
It is in this context that the SIBIS project was launched. There are at least two main reasons that make this work interesting. First, it is one of the few original attempts at a coherent and comprehensive approach to measuring the information society; as such, it can be expected to stimulate further discussion and research within the professional statistical community, leading to improved statistical literacy in Europe. Second, it provides a single source of real-time data that supports many of the new areas of ICT research. I see an advantage in applying benchmarking to collaboration between universities and learning from the best, just as in business: it creates a cooperative environment in which a comprehensive understanding of performance and "best-in-class" process enablers can be obtained and shared at a reasonable cost [4].

5. INTERNET IN EDUCATION
Today universities offer high-quality opportunities, and not only in engineering education. ICT skills have been obtained through formal education and training or, more informally, through the use of and experience with ICT. High standards in mathematics, science, and computing in most countries hold promise for the future supply of highly skilled professionals; the promising medium- and long-term supply of ICT professionals has been associated with an increasing number of young people in higher education and vocational schools.

Fig. 4. Technology-based training (by Back, 2001)

One of the modern learning methods is e-learning. All educators approach this new paradigm with varying degrees of enthusiasm and concern, and it is important to consider both the pros and cons of online learning, so that we can be better prepared to meet the challenge of working in this new environment and take advantage of the new opportunities it offers. In all countries, rapidly changing information and communication

technologies and the growth of ICT-related activities in all sectors have led to a shortage of highly skilled ICT professionals. Hiring difficulties indicate imbalances between existing skills and company demands. ICT skills are obtained through formal education and training or, more informally, through the use of and experience with ICT. The mobility of professionals complicates the prospects of preserving and developing ICT skills and abilities. There are many valid reasons why online programs are fast becoming a popular form of distance learning in higher education today. The online environment offers unprecedented opportunities for people who would otherwise have limited access to education, as well as a new paradigm for educators in which dynamic courses of the highest quality can be developed. While online programs have significant strengths and offer unprecedented accessibility to quality education, there are also weaknesses that can pose potential threats to the success of any online program. Especially in engineering education, I would prefer blended learning [9]. At our university we have embarked on intensive work preparing elective subjects for combined study forms in the WebCT virtual educational environment, and as support materials for full-time study forms. We also do video conferencing, but in my opinion it is not very suitable for education [8]. In e-learning, tutorials are a way for teachers to complement online learning with a face-to-face component: typically, a teacher will arrange a time when students can come to see him or her, or arrange for students to work in a learning center with the help of a tutor [10].

6. E-LEARNING - PROS AND CONS
It is important to consider both the pros and cons of online learning, so that we can be better prepared to meet the challenge of working in this new environment and take advantage of the new opportunities it offers. Our university created its first e-learning course early on; since 2001, our faculties have been preparing elective subjects for combined study forms in the WebCT virtual educational environment, and support materials for full-time study forms.

Inter-university studies
The University of Hradec Kralove has devoted its attention to the problems and issues associated with e-learning from the beginning. Already then, voices could be heard calling for projects of cooperation and collaboration between university-level institutions in building distance e-learning courses or unified study programs. An interesting possibility for collaboration was identified at the 2003 conference on e-learning in higher education organized by the University of Zlin: sharing courses, including the relevant teaching staff, and providing them to students from partner institutions, which led to the exchange of students through distance courses supported by e-learning. Since 2005, the first Czech virtual mobility has been carried out in the RIUS project (the implementation of inter-university studies in a network of selected universities in the Czech Republic), in which three universities (Hradec Kralove, Plzen, and Zlin) cooperated. It involves sharing both the courses and the teaching staff of the participating universities, with the possibility of completing part of one's study program at a partner university. The courses were delivered in the form of distance education with e-learning support.
These e-learning courses are organized as face-to-face introductory tutorials for both teachers and students at the students' home university, directed self-study supported by a virtual learning environment and, as necessary, further interim face-to-face meetings, combined with live exams. The face-to-face meeting can be replaced by synchronous video conferencing over the Internet. Depending on prior agreement with a partner university, students may choose these courses within the context of their compulsory electives. By making the best use of the range of inter-university studies offered to them, students can not only enrich their own curricula with topics attractive to them, but also learn about new educational methods and tools and participate in the genesis of an expanded system of inter-university studies in the Czech Republic, allowing the mutual exchange of study subjects and experts throughout the university network. Thanks to these projects, many students have had the opportunity to study at least one of the 164 subjects offered in a virtual learning space in the academic year.

7. PROBLEMS IN ENGINEERING EDUCATION
There are some specific problems, especially in the field of engineering:
- a mismatch between the importance and perceived needs of topics and the amount of coverage they receive in the classroom
- a lack of experience (most people with skills in teaching, management, and software engineering are in industry or elsewhere, primarily for financial incentives)
- a lack of relevant text and multimedia courseware (every book is different; it must be confusing for the technically oriented instructor to see such an incredible diversity of methods, approaches, and non-standard complex forms)
- inadequate computer technique

(all computers, techniques, and materials should be closely associated with the teaching of a specific topic).

8. CONCLUSIONS
In my opinion, the impact of engineering education in the information society depends on the quality of the lessons, of the learning material and, especially, of the teacher. Each instructor must develop his or her own style, using the techniques that suit them best and seem to achieve their goals; but online learning cannot replace face-to-face education. The situation in engineering education is similar to that in the education of future teachers: direct dialogue and practical experience are needed. Based on experience, it seems that in full-time forms of education we should consider carefully whether introducing e-learning is necessary in all cases. For engineering education, blended learning, which combines online and face-to-face approaches, appears (according to our experience) to be the most appropriate learning method.

9. REFERENCES
[1] Committee on Science, Engineering, and Public Policy, Reforming Graduate Education for Scientists and Engineers, National Academy Press, Washington, D.C., 1995.
[2] OECD, Measuring Scientific and Technical Activities, "Frascati Manual 1980", OECD, Paris, 1981, p. 26, and "Frascati Manual 1993", ibid., 1994, p. 30.
[3] Commission for National Investment in Higher Education, Breaking the Social Contract: The Fiscal Crisis in Higher Education, Council for Aid to Education, RAND Corporation, Santa Monica, CA, 1997.
[4] Anderson, T. D. and Garrison, D. R., Learning in a networked world: new roles and responsibilities, in Gibson, C. C. (Ed.), Distance Learners in Higher Education: Institutional Responses to Quality Outcomes, pp. 1-8, Madison, WI: Atwood Publishing, 1998.
[5] Clifford, J., Composing in stages: The effects of a collaborative pedagogy, Research in the Teaching of English, 15(1), 1981.
[6] Basili, V. and Turner, A., Iterative enhancement: a practical technique for software development, IEEE Transactions on Software Engineering, 1975.
[7] Newton, L. and Rogers, L., Thinking frameworks for planning ICT in science classes, School Science Review, 84(309).
[8] Jehlicka, V. and Rabe, V., Online ICT Training Courses in the WebCT Environment, ED-MEDIA, Vienna, 2008.
[9] Rabe, V. and Jehlicka, V., E-learning and blended learning at university, ED-MEDIA, Vienna, 2008.
[10] Cernak, I. and Masek, E., Possible approaches in the implementation of e-education at the university, 2009.

An Online Support Approach to Effective Teaching and Learning: A Case Study

Eudes K. Tshitshonu
Department of Mechanical Engineering, Faculty of Engineering and Technology, Vaal University of Technology, Vanderbijlpark 1900, South Africa

1. SUMMARY
The gap created by the brain drain due to the global mobility of the labor force, on the one hand, and the speed at which new knowledge and technologies are developed, on the other, is increasing every day. A paradigm shift is paramount: business models are being reviewed and renewed. Expert hands are not the only requirement for implementing new technologies; even more, trained minds with sufficient background and the ability to investigate novel approaches are required. The stakes are higher today, as sustainability and environmental awareness are high on the agenda of any new engineering and innovation effort. South Africa has embarked on a venture to increase its supply of engineers, technologists, and other essential career streams needed to close the skills-shortage gap. As key players, tertiary institutions are reaching out to mass enrollment with the aim of producing as many graduates as possible and doing their bit to meet the challenge. The competency of candidate students in basic numeracy and literacy is a great topic of debate: the mathematics, science, and physics skills of college applicants seem to leave a lot to be desired. An ongoing debate suggests extending the average three-year stay for the undergraduate qualification to four years, which may be detrimental, especially in an environment where the workforce is starved by skills shortages. As the relevance of the curriculum offered is questioned, the tools available to assist the learning process are investigated and their effectiveness evaluated. This paper discusses the provisional results of an ongoing observational study on the effective use of the other modes of teaching and learning that technology offers, in the particular case of managing large class groups: a comparison of pass rates between groups of students taught in traditional classrooms and those exposed to a blended learning mode that includes an online component.

Keywords: blended learning, embedded questions, Moodle, online tutorial, pass rate

2. INTRODUCTION
Innovation, customisation, upgradable business models, and new ways of organizing work: these are some of the attributes needed to stay competitive in the 21st-century economy. People and organizations need to continually update their skills and find new ways to manage knowledge and information. In 2000, the American Society for Training and Development (ASTD) and the National Governors Association (NGA) came together to form the Commission on Technology and Adult Learning [1], with the mandate to define and foster an environment of technology-enabled learning that results in an engaged citizenry and a skilled workforce for the digital economy. The commission focused its attention on technology-enabled learning designed to increase workers' knowledge and skills so they can be more productive, find and keep high-quality jobs, advance their careers, and positively impact the success of their employers, their families, and their communities. E-learning was highlighted for its potential to reduce the costs of workplace-related education and training, as well as its ability to offer universal access to best-in-class learning content and a wide variety of content available
anywhere in the world. In traditional training delivery, the learner must adhere to the training delivery schedule for maximum benefit and is in most cases limited to the content as presented in class. This study observes two groups of students enrolled in Introduction to Applied Mechanics. The first group is made up of full-time students; the second of part-time students who can only afford two hours of class a day after work. In an environment where students put in an intensive day's work before attending two-hour afternoon classes, peak performance cannot be expected: human limitations and unresolved plant/shop issues can take their toll, and in some cases students simply cannot attend class at all, particularly in cases of unexpected closings or other work-related matters. This is where Computer-Based Learning (CBL) comes in as the preferred option. Karon, cited by Wentling et al. [2], advocates the convenience of well-designed computer-based training, arguing that any well-designed computer-based training, whether based on a local intranet or delivered over the Internet, is more convenient than traditional instructor-led training or seminars, since self-paced Computer-Based Training (CBT) courses are available whenever students are ready and able to complete them, not just when the seminar or class is scheduled or the instructor is available. Class content becomes portable. However, the portability of class content does not overshadow the role of the trainer. Trainers must redefine their role as their work design and environment change. Building on their traditional responsibility as educators, their roles now range from instructional designer to instructional developer, trainer, and materials support. As instructional designer, the trainer performs the initial analysis and instructional design tasks, including advice on the exercises and review of the course. As instructional developer, the trainer writes course materials, exercises, and support materials, and develops overhead charts, exercises, and so on [3]. Ways of certifying the outcome of the learning process are required: tests, exams, quizzes, and a large number of different

types of evaluation methods are used to measure and sanction the outcome as acceptable or not. While most education and training systems implement, and agree on the need for, some form of assessment, a paradigm shift is taking place in the very essence of assessment: evaluate learning, or evaluate to learn? A drastic change is occurring in the culture of evaluation: authors advocate assessment as a tool to improve learning [4, 5, 6] rather than a simple test of learning outcomes. This redefines the training experience as a project managed by the trainer, in the role of main facilitator, connecting the various stakeholders. The student is one of the key stakeholders in the whole process, and no real training or learning is likely to take place without his or her involvement. At this point it is convenient to redefine the role of the student in the updated configuration.

3. ONLINE RESOURCES FOR LEARNING AND TRAINING
As the training environment evolves, traditional classroom contact is no longer the sole and primary point of delivery. In online and distance learning environments, the trainer has less direct, and sometimes almost no, real-time interaction with the student. That makes it difficult to tackle problems where spontaneous feedback in the classroom would add much value. In a classroom, the trainer can interact with the students through input/response exchanges that serve at least as an indication of whether the topic under discussion is accessible and understandable. Facial and body language are part of the communication protocol and can facilitate interaction, particularly in small groups. Short impromptu activities or one-off assignments provide real-time feedback and pinpoint gray areas where more emphasis may be needed. Lack of engagement and/or motivation is likely to be addressed early, making it easier to achieve the learning objectives. The situation is entirely different with out-of-sight students, whose challenges go unnoticed unless effective communication channels are open and working. The students' attitudes towards the training and the means of delivery (technology), and their self-motivation, are of paramount importance. Several researchers have identified individual characteristics that seem to describe the successful online student. Gibson [7] found it essential for distance learners to be focused, to manage their time well, and to be able to work independently and as members of a group, depending on the mode of delivery and location of the distance course. Other studies suggest strong self-motivation, self-discipline, independence, and assertiveness as important characteristics of online learners [8]. While the onus seems to rest entirely with students to address their discipline and work attitude, the content and context must be provided in a way conducive to effective delivery. Clark [9] approaches the latter through the principles of multimedia, contiguity, and modality. The multimedia principle suggests the use of still graphics such as line drawings, tables, and photographs, and motion graphics such as animation and video; graphics should be congruent with the learning content and add value to it rather than overpowering or distracting from it. Clark points out a common violation of the contiguity principle found in screen scrolling.
Showing graphics near or next to the related text gives a better descriptive effect than seeing the graphics first and having to scroll down to the descriptive text, or vice versa. Learning occurs in individuals through working memory, the active part of our memory system. Working memory capacity is necessary for learning to occur, and learning is depressed when working memory is overloaded. If words and the images they describe are separated, the learner must expend additional cognitive resources to integrate them; the contiguous display of images and words achieves this integration for the student, leaving the focus on the learning content [10]. Audio inserts are praised for adding value to online materials and enhancing the learning experience, as suggested by the modality principle [9].

4. TARGET GROUP
Two main groups of students were sampled for this study, full-time students and part-time working students, all enrolled in Introduction to Applied Mechanics. A mixed delivery mode of online resources and face-to-face delivery of class content was applied. The groups were subdivided as follows: Groups 1 and 2 were made up of part-time work-study students, while Groups 3 and 4 were made up of full-time students. All groups had real-time class presentations; Group 2, however, attended class in real time simultaneously with Group 1 but from a different location, in a different province of the country, via videoconference. Groups 1, 2, and 3 were exposed to online resources; Group 4 had no online exposure. Students in Groups 1 and 2 freely chose to accept their employer's offer to study and improve their qualification for a better position on the corporate ladder with the same employer after graduation. For the purposes of this research, it was assumed that they all had a clear vision of what to expect after the training, so motivation and attitude towards the training were not considered a major concern. The same assumption was applied to Groups 3 and 4, although most of the full-time students were much younger than the candidates in Groups 1 and 2 and might show some immaturity in their motivation to complete the academic program. However, in a country where the inequalities of the past are being corrected and a strong emphasis is placed on the benefits of education, the study was carried out under the assumption that full-time students should also be sufficiently motivated, with the right attitude to fuel their success. While attitude towards the training might not have been a problem, attitude towards the means of delivery might instead have been a cause for concern. The digital divide remains a challenge for many today, especially where basic life can be carried out without much recourse to computing devices, which is still true for a large number of communities in Africa, and some students would still feel intimidated using a computer. Our role was to create, promote, and facilitate, so that the platform operated in the most optimal way possible.

5. BLENDED LEARNING: CASE STUDY
The students were introduced to the theoretical concepts, as well as the respective practical aspects and implementations, through normal, traditional face-to-face classes. A set of worked examples and tutorial activities was introduced. After a topic was discussed during contact sessions, students in Groups 1, 2, and 3 were offered the opportunity to review the content material and attempt online tutorials.
Selected worked examples and additional tutorials were expanded in a way that discourages students from regurgitating memorized routines, highlighting instead the fundamentals and the underlying theory. The process was facilitated through the Moodle course management system (Modular Object-Oriented Dynamic Learning Environment). The easy-to-use Moodle environment was investigated as a potential support system for addressing deficiencies and improving the teaching and learning experience for large groups. The interface is easy to learn for both the student and the teacher. Basic keyboard, typing,

and mouse skills are enough for even a student with no previous computer exposure to start using the interface, and a quick introduction and training session was used to start the process and remove the intimidation of limited computer literacy. Selected class materials could be made available through the add-resource facility, which was used as a virtual bulletin board where highlights were displayed for reference. Lecture notes, worked sample solutions, and Internet links are some of the features used in the implementation of the online content offered in the mechanics classes. The environment also enables the design of web pages, thus enhancing the delivery of learning content with multimedia materials, and an evaluation interface makes it possible to upload quizzes and lesson paths. A powerful learning environment is characterized by a good balance between discovery learning and personal exploration, on the one hand, and systematic instruction and guidance, on the other [11]. The activity functionality in Moodle was used to make such a powerful learning environment possible. Embedded questions were used, accommodating calculated and worded/phrased results, as well as multiple choice, in a single question format. Graphics and images can be added as needed for most engineering problems, and care was taken to format the descriptive graphics and images as recommended by the contiguity principle [9, 10]. The questions were designed to let students work through theoretical content that might have seemed difficult to discern through class presentations. Students were thus guided to appreciate the work from the fundamentals and to verify their approach to problems against the essential theoretical background, instead of just doing meaningless engineering calculations. The student would be allowed to attempt a tutorial activity with immediate feedback. Moodle questions can be programmed for automatic grading, displaying cross and check marks for failed and successful attempts. This feature was pointed out to students as an opportunity to engage in constructive discussion and debate: cross marks (wrong results) would prompt students to review the fundamentals and theoretical content instead of engaging in a blind pursuit of satisfying check marks. Special emphasis was placed on keeping students from being tempted into guesswork, especially where the feedback provided the expected answers. It is common practice, where textbooks provide the expected final answers, for students to focus on the answers rather than on working out how to reach correct results, missing the opportunity to learn. Subsequent attempts may be allowed so that the student can verify the result after reviewing the fundamentals through additional reading, peer discussion, and consultation with the teacher. Learning was expected to flourish from the iterative process described.
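As an illustration of the automatic grading just described, the sketch below shows how a numeric answer could be checked against a tolerance and mapped to the check/cross feedback Moodle displays. It is a hypothetical Python rendering of the behavior, not the course's actual Moodle configuration; the question, expected value, and tolerance are assumptions:

    def grade_numeric(answer, expected, rel_tolerance=0.01):
        # Return a (mark, feedback) pair mimicking Moodle's auto-graded feedback.
        if abs(answer - expected) <= rel_tolerance * abs(expected):
            return "check", "Correct. Review how the fundamentals led you here."
        return "cross", "Incorrect. Revisit the underlying theory before retrying."

    # A hypothetical statics question expecting a 23.8 kN reaction force:
    mark, feedback = grade_numeric(answer=23.5, expected=23.8)
    print(mark, feedback)  # cross Incorrect. Revisit the underlying theory before retrying.

Allowing repeated attempts with this kind of immediate, non-revealing feedback is what supports the iterative review-and-retry cycle described above.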
6. CONCLUSION
The objective of this article is to discuss the provisional results of an ongoing observation of the effective use of the other modes of teaching and learning that technology offers, in the particular case of managing large class groups: a comparison of the pass rate between groups of students taught in traditional classrooms (Group 4) and those exposed to a blended learning mode that includes an online component. Admission to the final exam at the end of a study period is not automatic: students sit a series of tests and must pass them to gain admission to the final exam. The pass rate is calculated as the proportion of students who pass out of the total number admitted to the final exam. The following observations were made after the second semester of 2009. 43% of the students in Group 1 were allowed to write the final exam, and 91% in Group 2; Groups 1 and 2 both produced a 100% pass rate. 30% of the students in Group 3 qualified to write the final exam, while Group 4 allowed 37% of its population to write it; however, 75% of the admitted Group 3 students passed the final exam, compared with 61% in Group 4. It appears that students exposed to online content (Groups 1, 2, and 3) in addition to normal face-to-face classes were relatively better prepared. Previous observations showed that the 31% of the class population exposed to additional online support generated 53% of the overall passes in the sampled population. The result is even more interesting for Group 2, which joined the classes via videoconference and had no physical contact with the teacher; the speculation is that the lack of physical contact prompted more personal initiative, better time management, and effective use of all the available online resources. More observations are being made before valid conclusions can be drawn and established as an indication for large-population classes.
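In symbols (our notation, matching the definition above), with \(P\) the number of students who pass the final exam and \(A\) the number admitted to write it:

    \[ \text{pass rate} = \frac{P}{A} \times 100\% \]

Group 4's 61%, for instance, is read as 61 passes per 100 admitted students, not per 100 enrolled students, since only 37% of that group was admitted to the exam in the first place.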
7. REFERENCES
[1] Merrill, S. E. et al., A Vision of E-Learning for America's Workforce, Report of the Commission on Technology and Adult Learning, 2001.
[2] Wentling, T. L., Waight, C., Gallaher, J., La Fleur, J., Wang, C., and Kanfer, A., E-learning: A Review of Literature, Knowledge and Learning Systems Group, University of Illinois at Urbana-Champaign, 2000.
[3] Abernathy, D. J., The WWW of distance learning: who does what and where?, Training and Development, 52(9), 1998.
[4] Dochy, F. J. R. C. and McDowell, L., Assessment as a tool for learning, Studies in Educational Evaluation, Vol. 23, No. 4, 1997.
[5] Dochy, F., A new era of assessment: different needs, new challenges, Research Dialogue in Learning and Instruction, 2, 2001.
[6] Birenbaum, M., Breuer, K., Cascallar, E., Dochy, F., Dori, Y., Ridgway, J., Wiesemes, R. (Ed.), and Nickmans, G. (Ed.), A learning integrated assessment system, Educational Research Review, 1, 2006.
[7] Gibson, C., Toward an understanding of self-concept in distance education, American Journal of Distance Education, 10(1), 1996.
[8] Hardy & Boaz, 1997; Baker, 1995; cited by Wentling et al., 2000 [2].
[9] Clark, R., Six Principles of Effective e-Learning: What Works and Why, The eLearning Guild's Learning Solutions e-Journal, 10 September 2002.
[10] Sutherland, G., A Curriculum Framework for a National Diploma Introduction Program: Engineering at Vaal University of Technology, University of Stellenbosch (PhD in Curriculum Studies), 2009.
[11] Dierick, S. and Dochy, F., New lines in edumetrics: new forms of assessment lead to new assessment criteria, Studies in Educational Evaluation, 27, 2001.
Planning and the Novice Programmer: How Grounded Theory Research Can Lead to Better Interventions

Jonathan Wellons, Julie Johnson
Department of Electrical Engineering and Computer Science, Vanderbilt University
{jonathan.wellons,

Abstract - Planning is a critical initial step on the path to writing code, and a skill often lacking in novice programmers. As professionals, we are continually looking for, or creating, interventions to help our students, particularly those who struggle in the early stages of their computer science education. In this paper, we report on our ongoing research into novice programming skills, which uses the qualitative research method of grounded theory to develop theories and inform the construction of these interventions. We describe how grounded theory, a popular research method in the social sciences since the 1960s, can bring formality and structure to the common practice of simply asking students what they did and why they did it. Furthermore, our goal is to inform readers not only about our emerging theories of planning interventions, but also about how they can collect and analyze their own data in this and other areas of concern to novice programmers. In this way, those who lecture and design CS1 interventions can do so from a more informed perspective.

Index Terms - Novice Programmers, Planning, Qualitative Research, Grounded Theory

I. INTRODUCTION
Much research exists in the area of self-regulated learning and its effects on student performance. Students who report exercising skills such as goal setting, planning, self-control, and self-assessment experience higher levels of success and satisfaction than students who do not [1]. For programmers, planning is one of the first critical self-regulation skills they will need. Early programming experiences are defined by a novice's ability to engage in complex cognitive problem-solving processes while employing metacognitive self-regulation processes. While advice for teaching problem solving abounds, only recently have we seen developments in software tools and learning modules that address the planning process. The overall goal of our research is to design activities and scaffolds to teach and support the metacognitive skills that novice programmers need to achieve early programming success. The focus of this qualitative study was to use a systematic approach to observe and explain the planning process among novice programmers at an undergraduate university. We begin by analyzing interview data with the goal of developing emerging theories. Such theories, and subsequent test results (not part of this study), will lead to the development of tools to help students improve and refine this fundamental self-regulation skill. More specifically, we ask: What theory explains the planning process among novice programmers with no formal training in making such plans? And how might such a theory inform the construction of scaffolding and learning tools for novice programmers? To address these questions we employ a grounded theory study. This methodology is common in the social sciences and is appropriate when trying to develop the foundations for theories while avoiding pre-existing biases. Our goal is to produce data that can be qualitatively examined for connections between novices' planning, the causes or effects of their planning abilities, and other habits or tendencies on the part of students.

II.
II. THE ROLE OF PLANNING IN THE PROGRAMMING PROCESS
Developers use plans in large-scale projects to model a program at a manageable level of abstraction [2]. It is universally accepted that successful programming requires planning, and many different strategies are in use today. The first model put into widespread use was the waterfall model, which consists of successive stages of requirements, design, implementation, testing, and maintenance [3]. Gradually, more flexible models were applied to software, beginning with the idea of iterative and incremental development, a cyclical process that cascades through the model's elements multiple times, allowing for adaptation [4]. In recent decades, many of the same principles have reappeared in slightly different guises, such as the spiral model [5] and, more recently, extreme programming, agile programming, and test-driven development. Extreme programming is characterized by frequent release cycles, pair programming, extensive unit testing, and flexible schedules [6], [7]. Agile programming emphasizes open and frequent communication, adaptability, customer stakeholders, and cross-functional teams [8], [9]. The principle of test-driven development is to first institute a unit test before coding each feature or bug fix [10]; a sketch of this test-first loop is given below. Many other ideas have been put into practice, such as cleanroom software engineering, which was developed to produce reliable and verifiable software [11], and lean software development, which aims to eliminate waste in all its forms (excess bureaucracy, requirements, code, delay, etc.). It is clear that to participate in the design, management, development, and testing of large-scale projects, students must develop the planning skills that productive computer scientists and engineers exhibit.
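As a minimal sketch of the test-first loop mentioned above, the following Python fragment shows a unit test that would be written before the feature it exercises. The Caesar-shift function and its test are invented for illustration and are not drawn from the cited references:

    import unittest

    def caesar_shift(text, key):
        # Feature under test: shift each letter by 'key' positions,
        # wrapping around the 26-letter alphabet; leave other characters alone.
        result = []
        for ch in text:
            if ch.isalpha():
                base = ord('A') if ch.isupper() else ord('a')
                result.append(chr(base + (ord(ch) - base + key) % 26))
            else:
                result.append(ch)
        return ''.join(result)

    class TestCaesarShift(unittest.TestCase):
        # Under test-driven development this test is written first, fails
        # against a stub, and the feature is then coded until it passes.
        def test_wraps_around_alphabet(self):
            self.assertEqual(caesar_shift('xyz', 3), 'abc')

    if __name__ == '__main__':
        unittest.main()

Writing the failing test first pins down the feature's contract before any implementation choices are made.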

III. PLANNING RESEARCH
In his 1986 article, Soloway called for a redesign of the Computer Science curriculum to include the explicit teaching of problem-solving skills, including planning and goal setting. He noted that expert programmers drew from a library of canned solutions to form a template or starting point for their solutions. Soloway proposed using the goals/plans language to teach introductory programming, thus making the role of plans and goals explicit to the novice from the start [12]. In [13] it was shown that teaching programming and planning strategies explicitly was possible, did not increase the time needed for instruction, and could be measured through a written assessment. In response to these reports, various tools have been designed and tested in an effort to support planning and strategy selection using the expert programmer's approach as a model. In [14], 25 college students in two randomly assigned groups received training in planning by one of two methods. The treatment group used an intelligent tutoring system (ProPL) to implicitly structure planning through the use of prompts, while the other group received clickable text describing planning and strategies for programming. The results demonstrated the value of a scaffolded approach to teaching planning skills, as the ProPL group did better on assigned programming tasks than the other group. In [15], [16] the authors describe two object-oriented programming languages (Visual Plan Construct Language and Web Plan Object Language) designed to teach programming to beginners through plan management and integration. These languages facilitate the Plan-Object Paradigm, an approach that gives context to programming objects by allowing students to first create a plan and then use that plan to create working programs. In a broader approach, [17] explored scaffolding for scientific inquiry and the needs of students engaged in new and complex work processes, an apt description of novice programming. The authors evaluated Symphony, a software tool intended to scaffold student planning activities. They used the artifacts created by the tool to analyze the complex process of scientific inquiry, identifying learner needs that could be further scaffolded. In this way, the tool itself became a means by which further research could be conducted. Despite these advances in supporting the novice programmer, seventeen years after Soloway's article, [18] reported that strategies and plans, while crucial to learning outcomes in introductory programming courses, still receive much less attention than language-related knowledge. They also noted that the questions of why and how different strategies arise and how they relate to underlying knowledge remained open. Introductory programming classes differ in their emphasis on planning. Those that encourage it often do so in different ways. One class might teach writing program comments before any code, and a different class might teach top-down modularization. Students exposed to a limited planning style, or not exposed to planning at all, may experience deceptive initial success with small projects, but struggle later. Basic programming tasks are notoriously difficult for students to master, and many potential computer scientists drop out or fail to acquire essential skills (examples from Australia, the USA and the UK in [19], [20]).
Given that programming is an essential skill in many engineering disciplines (many with a shortage of employees) as well as in advanced courses, and that a huge diversity of educational approaches is employed, it is natural to wonder whether there is a better way to teach planning. Kuhl and Goschke [21] proposed a model for self-regulated learning that includes goal-setting and planning steps. Their model was recursive; students returned to their tasks again and again as they experienced internal feedback from the products they generated. This recurrence becomes more apparent in the programming experience, as students receive explicit feedback from the compiler or debugger used when creating a program. Error messages and programs that do not terminate are external signals to the student that a change is required. With little experience to draw from, reflection is often limited to the code itself, and incremental changes are made in an effort to improve the output. Changes often lead to incremental success, which in turn contributes to progress. When progress leads to completion of the program, this abbreviated reflective loop, contained entirely within the coding exercise, becomes a template for future programming assignments. As programming problems become more complex, progress may slow or stop entirely, signaling to the student that reflection on the larger plan may be in order. Whether this prompt is accepted or ignored, in the presence or absence of an initial plan drawn up by the novice, may shed some light on the initial planning process.

IV. THE USE OF GROUNDED THEORY AS A METHOD OF ANALYSIS
Grounded theory is a form of qualitative research based on the formation of theory from data. Open-ended interview questions are posed and data is collected in an effort to generate theories about the domain in question. The methodology can be described in five steps: 1) data collection; 2) open coding, by which researchers assign discrete codes to qualitative data; 3) grouping of codes into concepts and identification of one or more important concepts that merit further analysis, re-examination of existing data, and possibly further data collection to facilitate model building; 4) axial coding, that is, the construction of categories that highlight the relationships between concepts; and 5) the suggestion of one or more theories that describe those relationships. Grounded theory researchers cast a wide net to capture a diverse and multidimensional data set that can be fertile ground for theories [22]. Doing everything possible to avoid the influence of previous theories or other constructions, researchers allow the data to form the theory rather than use existing theories to code and categorize the data. As university professionals, we routinely collect data from our students in an effort to measure both the effectiveness of our teaching and the impression it makes on the students themselves. Performance data along with course evaluations serve as a reliable record of course results. Open-ended questions are often posed to students in an effort to gather qualitative data to inform future changes to the course (What, if anything, would you change about this course?) or to elicit a reaction to recent changes (How did you use the online planning modules when completing programming tasks?).
When we use this data to improve the course, we are using some of the techniques formalized in grounded theory. We synthesize these data with performance data, anecdotal evidence, and past experience to guide our next steps. This type of approach is the basis of research methods such as design-based research and action research. In contrast, grounded theory formalizes the process of data analysis and produces a theory or set of theories that can then be used to develop course modifications, controlled experiments, or future research in a larger context. Qualitative research has been used to investigate novice programming, although grounded theory has not been specifically applied to it. Interviews and coding are used in [23] to investigate how novice programmers conceptualize Java concepts. To investigate whether some elementary programming tasks are more difficult than others, [24] looked for bottlenecks for novice programmers in object-oriented programming. Students were observed during labs, and their affective states and behaviors were coded. Compiler error logs along with interviews were used in [25] to track the most common errors made by novice programmers. Like those involved in the BRACElet Project and others, we believe that CS curriculum challenges should be approached as research problems that require established research methods [26]. We chose the grounded theory approach for several reasons. First, we wanted to use a data collection method that was already familiar to us and to many professionals in the field of computer science education, and that did not require additional equipment or instrumentation. Second, the systematic approach to data analysis appealed to us as computer science educators, acting as a transition to other, less familiar qualitative research methods. And finally, we wanted to demonstrate that student interviews can be used for more than just the course they comment on, and can also provide insight into more general aspects of the programming experience.

V. METHOD
Our sample consisted of volunteers from three sections of an Introduction to Engineering class offered at a research university in the United States. The course, required of all first-year engineering students, was delivered in a four-week module consisting of approximately 14 one-hour lectures. The course introduces the fundamentals of programming (using MATLAB) within the context of cryptography. The general idea behind the course is to familiarize students with problem-solving techniques and tools (such as MATLAB and Excel) while giving them an overview of the field of Computer Science and some of its practical applications. The volunteers came from a variety of backgrounds, and not all were prospective Computer Science majors. They were compensated for their time and reflected roughly the same breakdown by gender as the class. None of those who volunteered were denied inclusion in the study. Because the goal of initial data collection is to collect as many different stories and experiences as possible, thus saturating each category with explanations and examples, random sampling is not as critical in grounded theory. In fact, in our discussion section we describe future data collection that will involve theoretical sampling, the selection of data based on its potential to represent the core theoretical constructs being studied. To collect candid data, volunteers' names were removed from their interviews.
We asked open-ended questions and encouraged students to discuss any aspect of their programming experience that they found significant. The interviewer who collected the data maintained office hours and gave a guest lecture in two or three class sessions of each of the three sections to develop rapport with the students and introduce the study. All interviews were voice-recorded for later analysis. Volunteers were encouraged to describe their experiences during various assignments, what type of plan they created, how detailed it was, and how it was adapted. Subjects were asked about their programming experience, hobbies, and other details that could lead to a planning theory. Several hundred codes were derived from the data, which were in turn grouped into 14 concepts. Finally, we organized the concepts into five categories that naturally suggested the three theories that comprise our results, as detailed in Sec. VI; a minimal sketch of this codes-to-concepts-to-categories bookkeeping is given below.

VI. RESULTS
Concepts were derived organically, based solely on the codes present. The interviews explicitly asked about the content and complexity of the student's plan. Examples include "I made a to-do list" and "I started by writing the comments, and then completed the code." These are grouped under the concept of Initial Plan. In addition, we asked how the plan fared when the student attempted to implement it and whether the plan evolved. One subject reported that "everything fit quite well." Another said, "I realized that I did not need some things. I had a hard time getting the alphabet [substitution cipher] to work." These are coded under the concept of Plan Adequacy. We invited students to describe their programming process. One subject reported having difficulty keeping track of row, column, index, and other variables. Another subject relied on old programs and examples. Other students reported difficult language features and the mechanical details of their run-and-debug cycle. These codes are grouped under the concept of Coding Process. Subjects were asked about sources of help during programming assignments. Many reported seeking help from classmates and friends who were more advanced engineering students or had previous programming experience. Many students used Google, MATLAB's online or integrated help, the teacher, or the textbook. These codes are grouped under the concept of Sources of Help. All subjects were asked about their debugging process. One student said that after working on a frustrating bug, "I took a day off to clear my head and then went back to see if it was okay." Some students ran their program after each newly added line to check for errors. Others ran it only when they thought the programming was finished. A handful of students used debugging output statements. There was considerable diversity in the types of test cases used. Some students used only the cases given in the assignment; others created random test cases. These codes fall under the Testing concept.
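To make the bookkeeping behind steps 2-4 of the methodology in Sec. IV concrete, the following minimal Python sketch groups open codes into concepts and concepts into categories. The excerpt texts, code labels, and groupings are invented placeholders, not the study's actual data:

    from collections import defaultdict

    # Step 2 (open coding): each interview excerpt receives a discrete code.
    open_codes = {
        "I made a to-do list": "todo-list-plan",
        "Everything fit quite well": "plan-held-up",
        "Googled how to write a for loop": "web-search-help",
    }

    # Step 3: codes are grouped into concepts.
    code_to_concept = {
        "todo-list-plan": "Initial Plan",
        "plan-held-up": "Plan Adequacy",
        "web-search-help": "Sources of Help",
    }

    # Step 4 (axial coding): concepts are aggregated into categories.
    concept_to_category = {
        "Initial Plan": "Planning",
        "Plan Adequacy": "Planning",
        "Sources of Help": "Programming Methodology",
    }

    categories = defaultdict(list)
    for excerpt, code in open_codes.items():
        concept = code_to_concept[code]
        categories[concept_to_category[concept]].append((concept, excerpt))

    for category, members in categories.items():
        print(category, "->", members)

In practice the mappings themselves are the product of repeated passes over the data rather than fixed tables, but the data flow from excerpt to category is the same.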

TABLE I: SAMPLE CODES AND THE CORRESPONDING CONCEPTS

Sample code | Assigned concept
"Initial plan was a short and vague list of tasks" | Initial Plan
"Plan failed because MATLAB does not handle long strings" | Plan Adequacy
"Program was built with snippets adapted from examples in class" | Coding Process
"Googled help, for example, how to write a for loop" | Sources of Help
"I wrote the program in 3 or 4 parts that were tested separately and combined at the end" | Testing
"My goal was to receive either a B or an A" | Goals
"It took me 2-2.5 hours to finish" (the student thought it would take 1 hour) | Time Required
"The first day was too much, the second day I started to understand, it clicked on the third day" | Class Experience
Programming background consists of using the Starcraft map editor | Programming Background
"A number cruncher in everyday life" | Quantitative Background
"Why go for the extra credit when I don't understand the basics?" | Ambition
Solved the permutation puzzle visually out of 362,880 possibilities instead of writing a time-consuming brute-force program | Lateral Thinking
Influenced lab partners to use pseudocode in the future | Personality

Subjects were asked about the type of goals they set for the task. Some students focused on grades, one reporting "a B or an A"; another said he just wanted to make the cut. Other goals were to finish before the weekend, and "initially I just wanted it to work, but then I wanted to satisfy my intellectual curiosity." Several students reported that they enjoyed the task and that no external motivation was necessary, but only one reported that the goal was to learn MATLAB. These codes were placed under the concept of Goals. Students were asked how long it took them to complete the task. Most responses were between 1.5 and 3 hours. Subjects generally found that it took longer than they expected, although there were exceptions. All codes related to the time the task took and how it compared with the student's prior expectations were combined into the concept of Time Required. Students were asked for information describing their educational experience in the course, which we classified under the concept of Class Experience. One reported that "the scavenger hunt [homework] was fun because it required intelligence." Another answered, "Oh, the hash code, that was frustrating!" Other students reported that the class moved too fast or that the examples were not relatable. One subject said that he was lost on the first day, began to understand on the second day, and fit in on the third day. We developed another concept that measures ambition from reactions to a cryptography task that required students to choose among four algorithms of varying degrees of difficulty. Each choice was accompanied by a maximum number of possible points, ranging from 110 for the most complex algorithm to 87 for the simplest. Various intermediate options were also offered (such as input restrictions or UI features) that could increase the point value of the attempt. A student's choices can be read as a measure of his or her self-confidence. Many students aimed low. One reported, "why go for the extra credit when I don't understand the basics?" Others opted for combinations worth more than 100 points, but not the maximum possible. These codes are placed under the concept of Ambition. Interviews included questions about each subject's programming background to uncover any relationship between a student's prior experience and planning or assignment success. Many students in the sample had little or no programming experience.
The most extensive background came from AP classes in Java and toy problems on an educational website. Another subject had learned MATLAB and C++ over the summer. One subject had used a script-based map editor for the game Starcraft. These codes were grouped under the concept of Programming Background. Subjects also discussed their experience in quantitative studies. Many students reported enjoying and excelling at Mathematics. One said that "math is my best subject." A handful of students were ambivalent about Math: "I went to AB Calculus because BC was like boot camp." These codes were grouped into Quantitative Background. Students were asked to describe their hobbies. Many students were interested in strategy, board, card, and video games. Only one reported sports. One reported "piano, writing poetry, and chess." Guitar playing was also mentioned. These codes were grouped under the Hobbies concept. In the course of describing their problem-solving plans, students often revealed insightful solutions. One problem required deciphering a message encoded with one of 9! (362,880) possible keys. Instead of writing the brute-force program suggested in class, several students were able to crack it with pencil and paper or use creative shortcuts that reduced the complexity of the program to be written; a sketch of the suggested brute-force approach appears below. One student solved the puzzle in Excel using built-in functions. We label these codes with the concept of Lateral Thinking. During the students' descriptions of their problem-solving techniques or group work, aspects of their personalities were revealed. A big proponent of the use of pseudocode reported that she had influenced her teammates to use it for the following individual tasks. When codes of this type became evident, they were classified under the concept of Personality.
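For a sense of the scale involved, a brute-force key search over all 9! permutations can be sketched in a few lines. The sketch is in Python rather than the MATLAB used in the course, and the symbol set, word list, and demo ciphertext are invented placeholders:

    from itertools import permutations

    SYMBOLS = "abcdefghi"      # the nine symbols the cipher permutes (assumed)
    WORDLIST = ("face",)       # placeholder: words we expect in the plaintext

    def score(candidate):
        # Placeholder fitness test; a real solver might use letter
        # frequencies or a full dictionary instead.
        return sum(candidate.count(w) for w in WORDLIST)

    def crack(ciphertext):
        best_text, best_score = None, -1
        # 9! = 362,880 candidate keys, the search space cited above.
        for perm in permutations(SYMBOLS):
            key = dict(zip(SYMBOLS, perm))
            candidate = "".join(key.get(c, c) for c in ciphertext)
            s = score(candidate)
            if s > best_score:
                best_text, best_score = candidate, s
        return best_text

    print(crack("igeb"))  # recovers "face" by exhaustive search

The lateral-thinking students effectively pruned this search by hand, exploiting structure in the cipher instead of enumerating all 362,880 keys.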

In this phase of the project, 14 concepts emerged as natural partitions of the codes, as shown in Table I. We then transitioned to the axial coding step and aggregated concepts into categories based on similarity. Five categories emerged as natural groupings, as shown in Table II. The organization of the data is bottom-up, reflected by placing the lower-level data on the left and the higher-level data on the right.

VII. EMERGING THEORIES
Our goal in grounded theory is to examine the apparent connections between categories to suggest theories that might explain the data before us. The selective coding process requires the selection of a core category. The connections are then studied in an attempt to define the relationships between all the other categories and the chosen core category. From this rich set of codes and concepts, many relationships are possible. For example, connecting the Planning and Programming Methodology categories, we found that plans that included testing were associated with less total time spent on the program. The relationships between Planning and Goal Setting and Achievement included a connection between planners and the scope of their ambitions in the program. Other connections that emerged include a relationship between students using lateral thinking and a lower level of frustration during their programming experience. Expanding any of these relationships into a working theory would require theoretical sampling followed by additional coding in an effort to saturate the data related to the categories involved. This exercise would allow us to strengthen the proposed theory. Ideal theories will not only be supported by data, but will potentially lead to research that improves pedagogy for novice programmers. In keeping with the conventions of grounded theory, we want to avoid the problem of existing theories or statements influencing our analysis of the data. However, we drew on the existing literature to identify gaps that could be informed by our work. Skilled planners have accumulated a library of templates that they can flexibly adapt and apply to the problem at hand [12]. Among novice programmers, for whom such a mental library probably does not exist, we do not know what form the planning process takes. Evidence from the data suggests that novice programmers borrow ideas from their areas of relative expertise. Students explicitly referred to their mathematical knowledge or their experience writing papers when describing the origin of their plans. This brings us to our first candidate theory. Theory I: Novice programmers try to adapt problem-solving strategies from other domains, such as math or essay writing. To guide us to the next theory, we investigated how to build a scaffold that can serve in place of this expert library of templates until it can be established. According to the data, students who wrote pseudocode were more likely to perceive task success, have higher ambition, and have an adequate plan. This was true regardless of whether the student had programming experience, which leads us to consider that the traditional skill of pseudocoding may be the only scaffolding needed for first-time programmers, at least until they have successfully completed a few tasks. Furthermore, effective use of pseudocode can be taught, which leads to the following theory. Theory II: Planning by pseudocode is feasible for novice programmers. Equally important for CS1 teachers is how to develop a sense of perceived success in novice programmers.
Based on various comments on the quality of students' learning experience that show a relationship between planning and perceived success in the class as a whole, we formulate the final theory. Theory III: Students who planned their programs are more likely to report a positive class experience.

VIII. LIMITATIONS AND FUTURE WORK
Within the scope of grounded theory, which is the production rather than the testing of theories, this work has several areas for further development. The students were from a single class intended for students seeking an engineering degree. Therefore, the results do not immediately apply to computer science majors or minors, because it is not known whether both sets are well represented in the data. Furthermore, the student sample is somewhat small and self-selected. The next step is theoretical sampling, which deliberately chooses samples to diversify the data set. The authors plan to theoretically sample different backgrounds and intended majors, as well as a class with a different style of teaching and a different programming language. During theoretical sampling, the original data will be added to rather than replaced. Practitioners divide teaching research into two classes: "What is" and "What works". "What is" research focuses on observations about current conditions and processes in the learning environment. "What works" research investigates and measures alternative teaching practices. This is a "What is" project that attempts to suggest relationships between novice program planning and other concepts for further study. In addition to using grounded theory to produce theories qualitatively, future work consists of two parts: refining these candidate theories and developing "What works" projects to realize the benefits of proving or disproving them. Having established three candidate theories, the next phase of our research is to collect more qualitative data to either strengthen or refute each of them. Strengthening Theories I and II could lead to guidelines for developing learning modules and subsequent scaffolds designed specifically for the novice programmer, while further exploration of Theory III would naturally lead to a quantitative study to verify the proposed relationship. Each candidate theory has the potential to affect the way we teach the metacognitive skill of planning and the emphasis we place on that exercise.

IX. CONCLUSION AND SUMMARY
In this article, we have applied grounded theory to interviews with novice programmers about their first programs. Through the principles of grounded theory we have coded, conceptualized, and categorized the interview data. We have elucidated the connections between categories to generate several plausible theories that explain the data.

TABLE II: CONCEPTS ASSIGNED TO EACH CATEGORY

Concepts | Assigned category
Initial Plan, Plan Adequacy | Planning
Coding Process, Sources of Help, Testing | Programming Methodology
Goals, Time Required, Class Experience, Ambition | Goal Setting and Achievement
Programming Background, Quantitative Background | Background
Hobbies, Lateral Thinking, Personality | Personal

Three candidate theories are proposed in this project, based on the observed relationship between planning strategies and self-reports of the programming experience. The theories are: 1) novice programmers try to employ problem-solving strategies from other domains with which they are more familiar; 2) pseudocode-based planning tends to be a relatively successful strategy for novices; and 3) program planning leads to a positive reported class experience.

X. ACKNOWLEDGMENTS
We are grateful for the support of the Center for the Integration of Research, Teaching and Learning project (NSF Grant No. DUE ) and the Qualitative Research Methods Workshop (NSF Grant No. DUE CCLI ). Without the support of these two projects, this research would not have been possible.

REFERENCES
[1] B. J. Zimmerman and M. Martinez-Pons, Development of a Structured Interview for Assessing Student Use of Self-Regulated Learning Strategies, American Educational Research Journal, vol. 23.
[2] C. C. Yu and S. P. Robertson, Plan-Based Representations of Pascal and Fortran Code, Proceedings of SIGCHI, May.
[3] Winston W. Royce, Managing the Development of Large Software Systems, Proceedings of IEEE WESCON, p. 19.
[4] C. Larman and V. R. Basili, Iterative and Incremental Development: A Brief History, Computer, vol. 36, no. 6, June.
[5] B. Boehm, A Spiral Model of Software Development and Enhancement, SIGSOFT Softw. Eng. Notes, vol. 11, no. 4.
[6] Mark C. Paulk, Extreme Programming from a CMM Perspective, IEEE Software, vol. 18.
[7] Kent Beck, Extreme Programming Explained: Embracing Change, Addison-Wesley.
[8] Agile Alliance, Manifesto for Agile Software Development.
[9] Orit Hazzan and Yael Dubinsky, Why Software Engineering Programs Should Teach Agile Software Development, SIGSOFT Softw. Eng. Notes, vol. 32, no. 2, p. 13.
[10] Kent Beck, Test-Driven Development by Example, Addison-Wesley.
[11] H. D. Mills, M. Dyer, and R. C. Linger, Cleanroom Software Engineering, IEEE Software, vol. 4, no. 5.
[12] E. Soloway, Learning to Program = Learning to Construct Mechanisms and Explanations, Communications of the ACM, vol. 29, no. 9.
[13] Michael de Raadt, Richard Watson and Mark Toleman, Teaching and Assessing Programming Strategies Explicitly, Eleventh Australasian Computing Education Conference (ACE2009), January.
[14] Kurt VanLehn and H. Chad Lane, Teaching the Tacit Knowledge of Programming to Novices with Natural Language Tutoring, Computer Science Education, vol. 15.
[15] A. Ebrahimi and C. Schweikert, An Empirical Study of Novice Programming with Plans and Objects, SIGCSE Bull., vol. 38, no. 4.
[16] Alireza Ebrahimi, Novice Programmer Errors: Language Constructs and Plan Composition, Int. J. Hum.-Comput. Stud., vol. 41, no. 4.
[17] Chris Quintana, Jim Eng, Andrew Carra, Hsin-Kai Wu, and Elliot Soloway, Symphony: A Case Study in Extending Learner-Centered Design through Process Space Analysis, in CHI 99: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 1999, ACM.
[18] Anthony Robins, Janet Rountree, and Nathan Rountree, Learning and Teaching Programming: A Review and Discussion, Computer Science Education,
vol. 13.
[19] Nghi Truong, Peter Bancroft and Paul Roe, A Web-Based Environment for Learning to Program, in ACSC 03: Proceedings of the 26th Australasian Computer Science Conference, Darlinghurst, Australia, 2003, Australian Computer Society, Inc.
[20] Michael McCracken, Vicki Almstrum, Danny Diaz, Mark Guzdial, Dianne Hagan, Yifat Ben-David Kolikant, Cary Laxer, Lynda Thomas, Ian Utting, and Tadeusz Wilusz, A Multi-National, Multi-Institutional Study of Assessment of Programming Skills of First-Year CS Students, in ITiCSE-WGR 01: Working Group Reports from ITiCSE on Innovation and Technology in Computer Science Education, New York, NY, USA, 2001, ACM.
[21] J. Kuhl and T. Goschke, Volition and Personality, Hogrefe and Huber.
[22] Patricia Yancey Martin and Barry A. Turner, Grounded Theory and Organizational Research, The Journal of Applied Behavioral Science, vol. 22, no. 2, p. 141.
[23] Anna Eckerdal, Novice Students Learning Object-Oriented Programming.
[24] J. O. Sugay, M. M. T. Rodrigo, R. S. J. Baker, and E. Tabanao, Monitoring Novice Programmers' Affect and Behaviors to Identify Bottlenecks in Learning, Computer Society Congress of the Philippines, March.
[25] Suzanne Marie Thompson, An Exploratory Study of Novice Programming Experiences and Errors, March.
[26] Tony Clear, Jenny Edwards, Raymond Lister, Beth Simon, Errol Thompson and Jacqueline Whalley, The Teaching of Novice Computer Programmers: Bringing the Scholarly-Research Approach to Australia, in ACE 08: Proceedings of the Tenth Australasian Computing Education Conference, Darlinghurst, Australia, 2008, Australian Computer Society, Inc.

Hardware Resources in Teaching Digital Systems
Yimin Xie, David Wong and Yinan Kong
Department of Physics and Engineering, Macquarie University, Sydney, NSW 2109, Australia

ABSTRACT
This paper provides an overview of the hardware resources required to support the delivery of a sequence of courses in digital systems, covering everything from digital fundamentals to the hierarchical design of complex digital systems. An example of an effective approach to engineering education is shown through the use of these resources.

Keywords: Digital Systems, Engineering Education, Hardware Design, Problem-Based Learning.

1. INTRODUCTION
The scope of the material covered in a sequence of courses in digital systems is very wide, and adequate hardware resources to support teaching are therefore essential. Hardware trainers have been developed in the authors' department to satisfy this requirement. Some trainers have been designed with a focus on a specific digital concept, while others have a much broader application and provide resources for problem-based learning. Both types of trainers are described in this paper. The four courses involved in teaching digital systems are Digital Fundamentals (in the first year), Programmable Logic Design (in the second year), and Computer Hardware and Digital Systems Design (in the third year). The scope of topics covered in the four courses is represented by the content of the textbooks and references shown below in Table 1.

2. CONCEPT-SPECIFIC TRAINERS
This set of hardware was developed for the Digital Fundamentals course. It is also to be used for classroom demonstrations, as well as large events such as university open days. All of this requires that the concept-specific trainers be easy to set up, and that each PCB be fully operational once the power supply (and clock) are connected. No additional wiring should be required (or, in some cases, only a minimal amount), and all relevant signals should be labeled and monitored by light-emitting diodes (LEDs). A typical trainer of this type was therefore designed as a small, interactive printed circuit board (PCB) focusing on one of the digital concepts of parity, multiplexing, adders/accumulators, flip-flops/counters, data buses, and shift registers. Each board is activated by setting switches or pressing buttons (to provide a single clock pulse), and its signals are monitored using the LEDs or a logic analyzer. This exactly satisfies the requirement for a simple class demonstration and also achieves the goal of stimulating the interest of new university students in digital system design. Figure 1 shows three of these PCBs; a software sketch of the parity concept demonstrated by one of them is given below.

Table 1: Text/reference books for each course

Course | Author(s) | Text/Reference
Digital Fundamentals | T. Floyd [1] | Text
Digital Fundamentals | R. J. Tocci, N. S. Widmer and G. L. Moss [2] | Reference
Programmable Logic Design | T. Floyd [3] | Reference
Computer Hardware | P. Spasov [4] | Text
Computer Hardware | R. J. Dirkman and J. Leonard [5] | Reference
Digital Systems Design | C. H. Roth and L. K. John [6] | Text
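As an illustration of what the parity trainer demonstrates, the following minimal Python sketch models even-parity generation and checking in software. The 8-bit data word and the choice of even parity are assumptions for illustration:

    def even_parity_bit(bits):
        # Returns the parity bit that makes the total number of 1s even,
        # mirroring what the parity trainer displays on its LED.
        return sum(bits) % 2

    def check(bits_with_parity):
        # A received word is accepted (under single-bit errors) when the
        # ones count, parity bit included, is even.
        return sum(bits_with_parity) % 2 == 0

    data = [1, 0, 1, 1, 0, 1, 0, 0]        # switch settings on the board
    word = data + [even_parity_bit(data)]  # transmitted word with parity bit
    print(word, check(word))               # True: no error introduced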

Figure 1: Examples of concept-specific trainers: (a) parity, (b) data bus, (c) multiplexing.

The present set of trainers has been implemented with medium-scale integrated circuits. Other concepts (such as error-correcting codes) have been considered and will likely be implemented using complex programmable logic devices (CPLDs).

3. GENERAL-PURPOSE TRAINERS
General-purpose trainers focus on small- and medium-scale logic circuits and integrated circuits (ICs). This type of trainer provides basic building blocks that can be interconnected via patch cords to build combinational or sequential circuits of simple or modest complexity; a software sketch of one such circuit follows below. These are exactly the techniques with which students should be equipped in their second and third years. Two such versions have been used in experiments to enrich students' hands-on experience in circuit design. The first consists of a printed circuit board whose top layer displays the distinctive shapes of gates and flip-flops, with associated sockets for inputs and outputs that can be physically interconnected. Each output is monitored by an LED, and the built-in circuits can be activated using switches and buttons. The second version is a multi-socket zero-insertion-force (ZIF) PCB for small- and medium-scale ICs. This version gives students experience in selecting and handling a wide range of integrated circuits. Figure 2 shows the two versions of general-purpose trainers, referred to as the Digital Fundamentals Trainer and the IC Trainer. The IC Trainer contains only two 20-pin ZIF sockets. Circuits of greater complexity can be handled by the GAL8 Trainer, which has eight 24-pin ZIF sockets.

Figure 2: General-purpose digital trainers: (a) Digital Fundamentals Trainer, (b) Integrated Circuit Trainer.
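The following sketch models the kind of circuit a student might patch together on the Digital Fundamentals Trainer: a half adder built from primitive gates. The Python representation is an illustrative assumption; on the trainer the same structure is wired with patch cords and read out on LEDs:

    def xor_gate(a, b):
        return (a + b) % 2   # exclusive-OR of two bits

    def and_gate(a, b):
        return a & b         # AND of two bits

    def half_adder(a, b):
        # Sum = A XOR B, Carry = A AND B; on the trainer each output
        # would drive its own indicator LED.
        return xor_gate(a, b), and_gate(a, b)

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"A={a} B={b} -> sum={s} carry={c}")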

4. PROGRAMMABLE LOGIC TRAINERS
Programmable logic trainers take advantage of the flexibility and immense functional capacity of programmable logic devices (PLDs). They have been used in the experimental sessions of the Programmable Logic Design and Digital Systems Design courses. Programmable logic trainers have been developed using Generic Array Logic (GAL) devices and field-programmable gate arrays (FPGAs). The GAL devices were chosen because their architecture, consisting of a programmable AND array followed by an OR gate and a flip-flop, with feedback connections from the flip-flop to the AND array, provides students with a smooth transition from the finite state machine (FSM) concepts studied in the previous course; a behavioral sketch of this macrocell structure is given below. Modern FPGA devices have a high functional capacity, and this type of PLD is the logical choice for the implementation of complex digital systems. In addition, the mandatory use of computer-aided design (CAD) software introduces students to the most advanced procedures used in industry for the implementation, simulation, and synthesis of FPGA-based systems. Figure 3 illustrates the programmable logic trainers. The GAL4 Trainer consists of a printed circuit board with four 20-pin ZIF sockets for four GAL16V8 devices. The GAL8 Trainer consists of a PCB with eight 24-pin ZIF sockets for eight GAL22V10 devices, together with a microcontroller and a variable clock generator. An important feature of this type of trainer is that all outputs can be labeled and monitored with LEDs. In the case of the GAL8 Trainer, there are 80 LEDs, and 32 signals can be monitored by a 32-channel logic analyzer. The FPGA Trainer consists of an FPGA development board (Spartan-3) connected to a desktop computer, with input and output boards. This system provides the ability to design and synthesize complex digital systems. The Xilinx Spartan-3 FPGA has sufficient capacity to design and build complex systems such as those developed for thesis projects (for example, [7]).

5. MICROCONTROLLER TRAINERS
Microcontroller trainers have hardware and software resources that give students experience in exercising and developing microcontroller-based systems. The microcontroller interface is important, and experiments should cover parallel ports, serial ports, interrupts, timing, and digital/analog inputs/outputs. Since bit-level manipulation is needed, especially for a memory-mapped I/O architecture (such as that of the Motorola 68HC11 microcontroller), some experience with assembler coding is required. The microcontroller trainer consists of a microcontroller development board connected to a desktop computer and a specially designed input and output PCB. This is supported by a series of experiments on computer interfacing. The 68HC11 evaluation board and input/output board are shown in Figure 4.

Figure 3: Programmable logic trainers: (a) GAL4 Trainer, (b) GAL8 Trainer, (c) FPGA Trainer with I/O cards.
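A behavioral sketch of the GAL macrocell structure described above (sum-of-products logic registered by a flip-flop whose output feeds back into the AND array) may help make the FSM connection explicit. This Python model and its toggle flip-flop example are illustrative assumptions, not vendor documentation:

    class GalMacrocell:
        # Product terms are lists of (signal_name, required_value) literals;
        # the OR of all product terms drives the D input of the flip-flop.
        def __init__(self, product_terms):
            self.product_terms = product_terms
            self.q = 0  # flip-flop state, fed back into the AND array

        def clock(self, inputs):
            signals = dict(inputs, q=self.q)  # feedback path: q re-enters
            d = any(all(signals[name] == val for name, val in term)
                    for term in self.product_terms)
            self.q = int(d)
            return self.q

    # Toggle flip-flop: Q+ = (T and not Q) or (not T and Q)
    cell = GalMacrocell([[("t", 1), ("q", 0)], [("t", 0), ("q", 1)]])
    print([cell.clock({"t": 1}) for _ in range(4)])  # prints [1, 0, 1, 0]

Because the registered output re-enters the AND array, each macrocell realizes one state bit of a finite state machine, which is why the GAL devices follow naturally from the FSM material of the previous course.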


Figure 4: Microcontroller trainer: (a) 68HC11 evaluation board, (b) input/output board.

6. HARDWARE RESOURCES FOR PROBLEM-BASED LEARNING
Problem-based learning (PBL) was a feature of a new course on Programmable Logic Design [8]. For this course, the main project involved the design, construction, and testing of a digital controller for the traffic signals at a complex traffic intersection; a minimal state-machine sketch of such a controller is given below. A printed circuit board with LEDs in positions corresponding to those of the physical intersection was prepared to facilitate testing. This is shown in Figure 5(a). To support those teams that choose a VHDL implementation, a new PCB has been developed containing a CPLD and a microcontroller, together with a model of the intersection. This is shown in Figure 5(b). A PBL approach has also been used in the Digital Systems Design course, where the main project is the development of a bus-structured computer. A hierarchical design approach is used, and the component modules are designed, built, and tested using the GAL8 Trainer. To facilitate prototyping, a printed circuit board with static RAM and tri-state buffering has been built. This is shown in Figure 5(c). This PCB can also be used when an FPGA implementation with VHDL is chosen. The hardware resources for PBL have helped students acquire the skills necessary to develop complex digital systems.

7. SOFTWARE RESOURCES
GAL-based experiments use OPALjr. This allows the use of Boolean algebra for the specification of combinational circuits for GAL outputs and flip-flop inputs. The microcontroller software used is the AS11 cross-assembler. FPGA experiments use Xilinx ISE software (Xilinx, 2009) for schematic/VHDL entry, simulation, and synthesis.

Figure 5: PBL hardware modules: (a) traffic intersection, (b) CPLD and traffic intersection, (c) RAM and tri-state buffer.
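A controller of the kind built in the traffic-intersection project is naturally specified as a finite state machine before being committed to a GAL/CPLD or to VHDL. The following minimal Python sketch uses invented states and dwell times for a simple two-road intersection; the actual project intersection is more complex:

    # States: (name, lights shown, dwell time in clock ticks)
    SEQUENCE = [
        ("NS_GREEN",  {"ns": "green",  "ew": "red"},    4),
        ("NS_YELLOW", {"ns": "yellow", "ew": "red"},    1),
        ("EW_GREEN",  {"ns": "red",    "ew": "green"},  4),
        ("EW_YELLOW", {"ns": "red",    "ew": "yellow"}, 1),
    ]

    def run(ticks):
        # Each tick corresponds to one clock pulse; on the test PCB the
        # light fields would drive the LEDs placed at the intersection.
        state, remaining = 0, SEQUENCE[0][2]
        for _ in range(ticks):
            name, lights, _ = SEQUENCE[state]
            print(name, lights)
            remaining -= 1
            if remaining == 0:
                state = (state + 1) % len(SEQUENCE)
                remaining = SEQUENCE[state][2]

    run(10)

Expressing the controller as an explicit state table first makes the later translation into registered sum-of-products logic or a VHDL process largely mechanical.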

8. CONCLUSION
The hardware resources developed for teaching digital systems have been presented. Concept-specific and general-purpose logic trainers were developed for the introductory courses, programmable logic trainers are aimed at handling complex digital systems, and microcontroller trainers were specially designed for the microcontroller teaching course. In addition, dedicated hardware has been implemented to support problem-based learning. All of these hardware resources have found their place and are playing an important role in the continuum of digital systems teaching. They help to integrate a systematic stream of digital systems into the teaching of Electronic Engineering in the authors' department.

REFERENCES
[1] T. Floyd, Digital Fundamentals, 10th ed., Pearson.
[2] R. J. Tocci, N. S. Widmer, and G. L. Moss, Digital Systems: Principles and Applications, 10th ed., Pearson, 2007.
[3] T. Floyd, Digital Fundamentals with PLD Programming, Pearson.
[4] P. Spasov, Microcontroller Technology: The 68HC11 and 68HC12, 5th ed., Pearson.
[5] R. J. Dirkman and J. Leonard, 68HC11 Microcontroller Laboratory Workbook, Prentice Hall.
[6] C. H. Roth and L. K. John, Digital Systems Design Using VHDL, Thomson.
[7] A. Pattison-Clarke, The Design, Implementation, and Operation of a Field-Programmable-Gate-Array-Controlled Computer Peripheral for the Acquisition, Processing and Display of Digital Data Streams, BE Thesis, Macquarie University.
[8] Wong, Imrie, and Xie, Problem-Based Learning Applied to a New Unit of Study on Programmable Logic Design, Proceedings of the Australasian Association for Engineering Education (AAEE).

Innovations Required for Short-Range Retail Beam Power Transmission
Girish Chowdhary and Narayanan Komerath
Daniel Guggenheim School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA

Abstract
Retail beam power has many potential applications. This paper focuses on short-range (within 100 km), low-efficiency (less than 50%) beam power transfer applications. Such applications include connecting micro- to medium-sized electronic devices to distributed power sources, power transmission to mobile consumers, and rapid restructuring of conventional wired network topology for damage mitigation. The innovations required to realize these applications are discussed.

Keywords: retail power transfer, power distribution and supply, renewable micro-energy

1. Introduction
Modern society has come to rely heavily on devices that run on electricity. The traditional method of electrical power transmission consists of wired power networks that are widely implemented and well understood. While modern wired power grids are efficient at high capacity, they require significant, expensive, and relatively rigid physical infrastructure. This rigidity does not favor mobile computing or distributed power generation. Applications such as power supply to remote military and scientific outposts, disaster areas, and miniature autonomous robots, and distribution to extraterrestrial bases, are best served by a flexible method of power transmission. Finally, there is a perceived need for rapidly reconfigurable networks for damage mitigation. Beam power transmission systems (BPTS) offer the flexibility needed for such power distribution. First demonstrated by Nikola Tesla in 1897, this method uses electromagnetic radiation. This paper discusses the feasibility of BPTS for emerging short-range (within 100 km), low-efficiency (less than 50%) applications. The paper then points out the technological innovations and research directions necessary to realize these applications. We begin by describing the science of beam power transfer. We then discuss the innovations required by outlining a spectrum of possible BPTS applications. The feasibility and cost-effectiveness of these applications is then explored.

2. The Science of Beam Power Transfer
Wired power transfer has several disadvantages:
1. Extensive infrastructure for power transmission, including wires, poles, land (to site the poles), and transformers.
2. Significant effort and resources are required to set up the infrastructure, and once established, it is difficult to make changes to the network topology. This method of power transfer can therefore be considered rigid.
3. Large cleared trails through forests and mountains.
4. Vulnerability to attacks, accidents, and natural disasters; reliance on a rigid infrastructure inhibits restructuring of the network topology to mitigate damage.
5. High maintenance costs, including at remote locations.
6. Inhibition of the development and exploitation of micro renewable energy resources.

Beam (wireless) energy transmission uses electromagnetic radiation (microwave or laser) for energy transfer [2], [3], [11]. BPTS does not rely on a rigid infrastructure of cables and can therefore bring great flexibility to power transmission. Wireless power transmission was first demonstrated in 1897 by Nikola Tesla using radio frequencies, and using microwaves in 1964 by William Brown [2]. NASA later extended BPTS to tens of kilowatts. In the 1980s, beams of up to 1 GW were considered under the Strategic Defense Initiative.
BPTS has also been explored for bringing solar power generated in space to Earth, both in the US and abroad (e.g. [3], [4], [6], [10]), and especially in Japan [13]. However, BPTS has not received much attention for conventional power distribution. On the other hand, there has been a revolution in the wireless transmission of information over the last two decades. Satellite television, mobile phones, and wireless Internet connections reach billions of customers. Research into efficient low-intensity information transfer over long distances received a major boost with the advent of high-frequency digital transmission and reception, resulting in devices that require very little power to operate. Therefore, there are billions of low-power devices in operation every day that have the ability to rapidly decode the information contained in electromagnetic waves. Wireless power transfer (or BPTS) uses the same basic science as wireless information transfer. As shown in Figure 1, direct current is converted to power in the microwave or millimeter-wave regime with efficiencies of 70-90% [2]. The beamforming process can be performed with efficiencies of 70-97%, but transmission and reception efficiencies vary widely; Ref. [2] cites efficiencies from as low as 5% to as high as 95%. The final stage of converting microwave energy back to direct current using rectennas has an efficiency range of approximately 85-92%. Brown [2] cites an overall efficiency of 52% achieved in DC-to-DC power transmission using microwave beams in laboratories using standard equipment.
Brown also claims that this efficiency could be raised to around 76% using specifically designed components. It should be noted that Brown's results refer mainly to power transmission in the GHz range. The free-space aperture-to-aperture transmission efficiency is classically expressed in terms of the efficiency parameter

\tau = \frac{\sqrt{A_t A_r}}{\lambda D}    (1)

where $A_t$ and $A_r$ represent the areas of the transmitting and receiving apertures respectively, $\lambda$ denotes the wavelength of the signal, and $D$ denotes the separation distance between the two apertures. Therefore, as the wavelength decreases, moving to higher frequencies, the aperture areas required for a given efficiency can be reduced. This consideration offers substantial system improvements if the conversion between DC and millimeter waves, especially in the atmospheric transmission windows around 140 and 220 GHz, can be made efficient.

Figure 1: Schematic of a DC-to-DC beam power transmission system (DC-to-wave conversion, beamforming, transmission, reception, and wave-to-DC conversion).

3. Feasibility of Short-Range, Low-Efficiency BPTS
In the past, researchers have focused heavily on the use of BPTS as an enabling technology for space solar power systems (SPS) (see, for example, [2], [3], [4], [10], [13]). These systems are characterized by low-efficiency power transmission from orbital distances (around 400 to 800 km). With the notable exception of [19], little attention has been paid to low-intensity, short-range (within hundreds of kilometers) power transmission using BPTS to power devices that require low to medium amounts of power. This application area has enormous potential considering the ubiquity of such devices. Power delivery in this context is characterized by low intensity, shorter range, wide coverage, and consistency, and efficiency can be traded for convenience and coverage. We now use the Friis transmission equation to show that in this context BPTS can produce feasible designs. Considering ideal conditions for the transmission of microwave beams, the Friis transmission equation is

\frac{P_r}{P_t} = G_t G_r \left( \frac{\lambda}{4 \pi D} \right)^2    (2)

where $P_t$ is the transmitted power, $P_r$ is the received power, $G_t$ and $G_r$ are the gains of the transmitting and receiving antennas respectively, $\lambda$ is the wavelength, and $D$ is the distance over which power is transferred. Assuming a conservative energy efficiency of 20% (i.e., $P_r / P_t = 0.2$), the variation of BPTS system range with system frequency is shown in Figure 2; a short script reproducing this trend is given below.

Figure 2: BPTS range (km) versus system frequency (GHz) for transmit/receive antenna diameter combinations of 20 m/20 m, 20 m/5 m, 20 m/1 m, and 30 m/5 m.

The figure shows that for the different combinations of antenna diameters the range varies linearly with the transmission frequency. Figure 3 shows the variation in antenna diameter with frequency over the range 2 GHz to 200 GHz. It can be seen that at lower frequencies (near 2 GHz), larger antennas are required to achieve good transmission ranges. This graph, combined with the fact that the 2 GHz to 2.5 GHz frequency band is heavily used by wireless LANs and other electronic devices, suggests that optimal frequencies for BPTS could lie between 50 GHz and 300 GHz.

Figure 3: Receiver antenna diameter (m) versus transmit frequency (GHz) for system ranges of 1, 5, 30, 50, and 100 km.
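The linear range-versus-frequency trend of Figure 2 follows directly from Equation (2) once aperture gains are assumed. The short Python script below solves Equation (2) for the range D at a fixed 20% efficiency; the ideal gain formula G = (pi d / lambda)^2 and the printed frequency points are assumptions for illustration:

    import math

    C = 3.0e8  # speed of light, m/s

    def bpts_range(freq_hz, d_tx, d_rx, efficiency=0.2):
        # Solve the Friis equation Pr/Pt = Gt*Gr*(lambda/(4*pi*D))**2 for D,
        # with ideal aperture gains G = (pi*d/lambda)**2 for each antenna.
        lam = C / freq_hz
        g_tx = (math.pi * d_tx / lam) ** 2
        g_rx = (math.pi * d_rx / lam) ** 2
        return (lam / (4.0 * math.pi)) * math.sqrt(g_tx * g_rx / efficiency)

    # 20 m transmit / 5 m receive antennas, as in Figure 2's middle curve
    for f_ghz in (20, 50, 100, 200):
        km = bpts_range(f_ghz * 1e9, 20.0, 5.0) / 1000.0
        print(f"{f_ghz} GHz: {km:.1f} km")

Substituting the gain expression into Equation (2) gives D = pi * d_tx * d_rx * f / (4 c sqrt(Pr/Pt)), which grows linearly with frequency, matching the trend in Figure 2.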
Figure 4 shows the variation in energy efficiency as a function of system frequency for a 20 m / 5 m transmit/receive antenna combination. The graph indicates that higher efficiencies can be achieved at higher transmission frequencies.
Hence, the analysis using the Friis equation strongly suggests the use of high transmission frequencies for BPTS. However, these results should be viewed with caution, as they do not take into account the effect of atmospheric attenuation. In particular, it is known that for frequencies higher than 10 GHz the attenuation due to rain is about 10 dB/km, while that due to water vapor is higher [1]. In addition, in urban environments the effects of multipath must also be taken into account. Finally, we note that Ball has raised concerns about the accuracy of atmospheric attenuation values in different frequency ranges [11]. Ball's comments suggest that the values may be overly conservative, particularly for vertical transmission, where atmospheric density is not constant. In summary, even at low efficiency, the Friis equation indicates that a feasible BPTS can be designed with reasonable antenna dimensions that allow short-range wireless power transmission.

Figure 4: Power efficiency versus system transmission frequency for different system ranges, assuming a transmit antenna diameter of 20 m and a receive antenna diameter of 5 m.

This analysis supports the feasibility of BPTS intended to deliver small amounts of power to micro- to medium-sized devices performing valuable functions at short range. In the simplest architecture, the power will be distributed over a significant area, within which all devices can accept the power that falls on their collectors, with the rest being wasted. This will result in a significant loss of efficiency; however, if the energy used comes from micro renewable sources, this becomes less of a concern. Such a BPTS can augment the established wired network to meet requirements that are currently unmet:
1. Quickly provide power to remote consumers without having to set up expensive wired infrastructure or transport power generation equipment.
2. Manage distributed power generation and consumption seamlessly. This includes connecting mobile or distributed power sources to the main grid and connecting mobile power consumers to distributed sources without the need for an elaborate cabling infrastructure.
3. Quickly restructure the network topology to enable fast and efficient mitigation of partial network failures.

4. Potential Applications
In this section we discuss some potential applications of BPTS. This discussion serves to give a perspective on what is required in terms of technical advances and to point out the technical innovations required.
1. Wide-area low-intensity power distribution: Small-scale BPTS can be used in a shopping mall or cafeteria type environment to allow enabled devices to charge automatically. Due to the proximity of the power source and the low power requirements of modern portable electronics, the intensity of the transmitted power need not be high.
2. Rapid power supply to remote military or scientific outposts: BPTS can be used to deliver targeted power to military or scientific outposts operating in remote regions.
3. High-endurance miniature robots: Battery capacity is one of the limiting factors in the design of miniature robotic vehicles such as miniature unmanned aerial systems (M-UAS). These vehicles cannot carry a significant amount of power on board. Power delivery using trailing cables has been attempted previously [14]; however, this method can be impractical and limiting due to the required infrastructure.
BPTS has the potential to revolutionize the capabilities provided by M-UAS by using beam power transmission to increase their endurance. This capability may enable exciting new applications, including exploring indoor stadiums with a team of networked M-UAS operating in close proximity to a mothership that harvests power locally and transmits it wirelessly to the receiving M-UAS.
4. Distributed power generation: There is a strong drive in the market towards the use of smaller self-sufficient units that generate enough electricity for local purposes. This paradigm is called distributed power generation. It is postulated that distributed power generation will not only be able to meet our ever-increasing energy needs, but also has the potential to be extremely cost-effective and sustainable, as it is primarily based on the exploitation of renewable resources [14]. The power distribution systems that support distributed power generation must remain flexible and highly adaptable. BPTS-based distribution systems can provide this flexibility, as they are not tied to a rigid ground infrastructure.
5. Remote-area scouting: Beamed power can be used to increase the distance a single unit can cover while scouting remote areas through directed power delivery. At extraterrestrial bases such as the Moon, beamed power is an extremely attractive option, as the cost of transporting cables and other network-related components can be formidable. Also, where there is no atmosphere (as on the Moon), ideal efficiencies can be achieved. These benefits make the use of beamed power at lunar bases an extremely attractive option. Reference [16] suggests the use of mobile power generation plants on the lunar surface to ensure a continuous solar power supply throughout the lunar night and day.

supply throughout the lunar night and day. BPTS systems have a clear advantage over conventional cable power transmission systems in this case.
Figure 5: Possible applications of BPTS at short distances (tens of kilometers): distributed energy sources, charging of mobile electric vehicles, BPTS transceivers, power supply to UAS, the wired network, and remote consumers.
6. Increasing the range of electric vehicles: Current electric vehicles have limited range due to the weight and volume of the battery. Enabling metered power delivery on roads or in parking lots can go a long way towards increasing the range of electric cars and can provide new sources of revenue. By securing vehicles, or by only delivering power when the vehicle is parked and unoccupied, safety concerns can be eliminated. 7. Rapid restructuring of network topology for damage mitigation: Augmenting the wired power grid with BPTS transceivers can mitigate partial damage in conventional wired networks by enabling rapid diversion of power around areas of the network that are damaged and unable to transmit. Figure 5 shows a scheme of the possible flexibility in the applications offered by BPTS. The figure visualizes the distribution of power generated from distributed power sources and the conventional wired network to remote consumers, flying vehicles, and electric vehicles. 5. Cost-effectiveness of BPTS for potential applications The current cost per kilometer of cable power transmission is around USD 1 million (2010) (see, for example, [20],[21] for cost per mile). This cost includes materials, location (land cost) and environmental costs. Traditionally it is calculated by dividing the total cost of the project by the number of kilometers of cable used. Since traditional transmission projects span hundreds (if not thousands) of kilometers, adjustments must be made for the savings gained from large-scale production and operation. For this reason, this cost can be misleading over short distances. Therefore, we propose the following model to capture the effective cost of power transmission by cable: C_wire(d) = c d (1 + \gamma e^{-d/d_0}) (3). In the above equation, d denotes the distance over which power will be transmitted, the constant c captures the cost per kilometer, and the exponential term represents the effect of the savings made over long distances. Clearly, for a given d_0, if d is large, the effect of the exponential term is negligible. On the other hand, the infrastructure cost for BPTS over short distances is significantly lower than for cable transmission systems, since location and material costs are significantly reduced. Assuming that a BPTS deployment of more than 1 km is equivalent to a 10-pole cable deployment, one way to approximate the BPTS cost per kilometer is to divide the cost per kilometer of cable transmission by 10. With this assumption, the BPTS cost can be approximated at around USD 100,000 per km. However, this cost does not take into account the reduced efficiency of power transmission over long distances. The power transmission efficiency \eta, modeled by the Friis equation (equation 2), is inversely proportional to the distance. The following model can be used to capture the effective cost of BPTS systems: C_BPTS(d) = c_B d + \kappa / \eta(d), where, for a scaling constant \kappa, the last term captures the losses resulting from the loss of efficiency over long distances. Figure 6 shows the effective cost of wired and beamed power transmission for various frequencies.
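To make the two cost models concrete, the sketch below (Python/NumPy) generates curves of the kind plotted in Figure 6. Every constant in it (aperture efficiency, cost coefficients, scaling factor) is an illustrative assumption rather than a design value from the paper; the Friis relation of equation 2 and the gain model of equation 4 are the standard ones.

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def friis_efficiency(freq_hz, d_m, d_tx=20.0, d_rx=5.0, e_ap=0.5):
    """Power transfer efficiency from the Friis equation (eq. 2),
    with antenna gains G = e_ap * (pi * D / lambda)**2 as in eq. 4."""
    lam = C / freq_hz
    eta = e_ap**2 * (np.pi * d_tx * d_rx / (4.0 * lam * d_m)) ** 2
    return np.minimum(eta, 1.0)  # efficiency cannot exceed unity

def cost_wire(d_km, c=100.0, gamma=1.0, d0=50.0):
    # Effective cable cost, eq. 3 (thousands of USD; illustrative constants):
    # per-km cost is inflated at short distances, where no scale savings apply.
    return c * d_km * (1.0 + gamma * np.exp(-d_km / d0))

def cost_bpts(d_km, freq_hz, c_b=10.0, kappa=50.0):
    # Effective BPTS cost: ~1/10 of cable per km plus an efficiency penalty
    # that grows as the Friis efficiency falls off with distance.
    eta = friis_efficiency(freq_hz, d_km * 1000.0)
    return c_b * d_km + kappa / eta

d = np.linspace(1.0, 10.0, 10)  # range in km
for f in (8e9, 10e9, 20e9):
    print(f"BPTS {f/1e9:.0f} GHz:", np.round(cost_bpts(d, f), 1))
print("wire:        ", np.round(cost_wire(d), 1))
```

With these assumed constants the same qualitative trends appear: BPTS is competitive near 1 km, and higher frequencies stay competitive out to longer ranges.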
In that graph, the diameters of the receiving and transmitting antennas are fixed, and the antenna gains are approximated using the equation G = e_a (\pi D / \lambda)^2 (4), where \lambda is the wavelength and e_a is a transmission efficiency factor. From Figure 6 it can be seen that at a distance of 1 km, BPTS are competitive with wired power systems. The figure also shows that the cost of BPTS increases significantly over cable transmission as the transmission distance increases, and that the cost is inversely proportional to the transmission frequency. In particular, at a transmission frequency of around 20 GHz, BPTS can compete with cable transmission for distances of up to 6 km. Numerical values will depend largely on the actual design of the BPTS; however, the trends should remain the same. The preliminary analysis above indicates that for applications that require power transmission over short distances (within 100 km), BPTS can compete with cable power transmission in terms of cost-effectiveness. There are a number of applications that fall into this category, including the ones mentioned in the previous section. On the other hand, when efficiency over long distances is considered, BPTS are ineffective. Transmission at higher

frequencies, improved antenna design, and a better understanding of the health effects of high-frequency transmission are required to make the case for BPTS over greater distances.
Figure 6: Effective cost of wired and beamed power transmission (in thousands of USD) vs. range in km, for wired transmission and for BPTS at ω = 8 GHz, 10 GHz, and 20 GHz.
6. Technological innovations required In this section we provide a high-level overview of the technological innovations required to realize some of the potential applications discussed in the previous section. a) Efficient frequency conversion: A major hurdle for BPTS implementation is the inefficiency in converting microwave frequencies to grid-frequency operating voltages, and vice versa. Advances in optical rectennas that couple and rectify optical frequencies to DC provide encouraging news for beamed power. This direction of research should be pursued. b) Advances in antennas: Antennas that work efficiently at higher frequencies are very important for beamed power. It will be necessary to develop low-cost antennas capable of transmitting at higher frequencies (20 to 200 GHz) within narrow frequency bands. These antennas should be designed as phased arrays, to allow real-time beam steering at high frequencies without physical actuation. c) Advances in electronics: Passive and active electronic circuits operating at frequencies on the order of 90 to 250 GHz will require significant advances in nanoscale fabrication technologies and RF engineering. d) Direct conversion of broadband sunlight to a narrowband beam: The implications of solving this innovative technological concept are enormous from a Retail Beamed Power perspective. This technology eliminates the need to convert sunlight to DC, and thus efficiency is significantly improved, not only for Space Solar Power (SPS, an application of BPTS that has been extensively studied; see for example [3], [4], [10], [13]), but for beamed power in general. e) Innovations in radiation monitoring below 200 GHz: Brown [2] mentions that the studies conducted by DOE/NASA did not find any major issues hindering the deployment of BPTS, including environmental and biological considerations (Brown refers to [18]). Similar studies for the 200 GHz regime have not yet been published. f) Decentralized network management through networked control: One of the main capabilities offered by the use of transmitted power is network flexibility and the ability to support distributed power generation. Advances are required in the decentralized management of network structures. This includes thinking of each transceiver as an autonomous agent that must use locally available information to work synergistically with other networked transceivers to meet globally defined needs. The emerging field of decentralized control of networked systems (see for example [17]) promises tools that will be essential to guarantee these capabilities. Some areas where technological advances are needed are: 1. Real-time decentralized fault detection in networks. 2. Real-time, efficient restructuring of the network topology to ensure uninterrupted power delivery by bypassing non-functional units. 3. Decentralized network voltage regulation. g) Graph-based models of decentralized networks: A power distribution system consists of independent power generation and consumption nodes that are connected through some type of network (a small connectivity sketch follows at the end of this section).
Such systems are very well represented within the framework of graph theory. Graph theory has excellent tools that can be used to model power distribution systems. For example, the notion of strong connectivity of a graph can be used to determine whether a network can distribute power among all of its nodes. Research to bring the tools of graph theory to the modeling of BPTS-equipped decentralized networks will be invaluable. 7. Conclusions In this paper, we describe a number of innovations needed to take Beamed Power Transmission Systems (BPTS) from concept to reality. We note that BPTS is an established concept that can bring immense flexibility to power generation and distribution. We point out several possible applications of BPTS, ranging from power supply to remote outposts to power supply to mobile units. In addition, the feasibility of BPTS for these applications and their cost-effectiveness were analyzed. We conclude that the necessary technological innovations include improved antenna design, efficient frequency conversion, broadband sunlight to narrowband beam conversion, innovations in radiation monitoring, and control-theoretic approaches to decentralized network management. This list is by no means exhaustive. We strongly believe that current advances in electronics, control theory, and antenna design point in the right direction to undertake these innovations.
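As the sketch announced in item g), the snippet below (plain Python, hypothetical node names) checks strong connectivity of a directed power-exchange graph, i.e., whether every node can both send power to and receive power from every other node:

```python
from collections import defaultdict

def reachable(adj, start):
    # Nodes reachable from `start` via iterative depth-first search.
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(adj[node])
    return seen

def strongly_connected(edges, nodes):
    # A directed graph is strongly connected iff every node is reachable
    # from an arbitrary root in both the graph and its reverse.
    adj, radj = defaultdict(list), defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)
    root = next(iter(nodes))
    return reachable(adj, root) == set(nodes) == reachable(radj, root)

# Hypothetical network: power can be beamed A->B->C->A, plus a spur to D.
nodes = {"A", "B", "C", "D"}
edges = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]
print(strongly_connected(edges, nodes))  # False: D cannot send power back
```

In a BPTS context, an edge would represent a feasible beam link, and a failed strong-connectivity test flags nodes that the network cannot serve bidirectionally.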

REFERENCES [1] Girish Chowdhary, Rajeev Gadre, Narayanan Komerath, Policy Issues for Retail Beamed Power Transmission, Proceedings of the Atlanta Conference on Science and Innovation Policy, October. [2] William C. Brown, E. Eugene Eves, Beamed Microwave Power Transmission and its Application to Space, IEEE Transactions on Microwave Theory and Techniques, Vol. 40, No. 6, June. [3] N. M. Komerath, N. Boechler, S. S. Wanis, Space Power Grid: Evolutionary Approach to Space Solar Power, Proceedings of the 2006 ASCE Earth and Space Conference, League City, Texas, April 2006. [4] N. Komerath, V. Venkat, J. Fernandez, Near-millimeter wave problems for a space power grid, Proceedings of the SPESIF Conference, American Institute of Physics, March. [5] DOE, U.S. Climate Change Technology Program. [6] Goswami et al., New and emerging developments in solar energy, Solar Energy, Volume 76, Numbers 1-3, January-March 2004. [7] Web resource, last visited October. [8] Web resource, last consulted October. [9] Web resource: Microwave and radiofrequency radiation, CWA, last visited October. [10] Peter Koert, James T. Cha, Millimeter Wave Technology for Space Power Transmission, IEEE Transactions on Microwave Theory and Techniques, Vol. 40, No. 6, June. [11] John A. Ball, On Atmospheric Attenuation, Haystack Laboratories, MA, December. [12] Constantine Balanis, Antenna Theory: Analysis and Design, 3rd Edition, John Wiley and Sons, USA. [13] Masahiro Mori, Hideshi Kagawa, Yuka Saito, Japan Aerospace Exploration Agency (JAXA), Summary of Studies on Space Solar Power Systems, Acta Astronautica, Volume 59, Numbers 1-5. [14] Samuel A. Johnson, Justin M. Vallely, A Portable Aerial Surveillance Robot, Proceedings of SPIE, The International Society for Optical Engineering, Vol. 6201. [15] V. V. Vaitheeswaran, Power to the People, Farrar, Straus and Giroux, New York. [16] Paul D. Lowman Jr., Moonbase Mons Malapert, Aerospace America, AIAA, October. [17] Magnus Egerstedt, Mehran Mesbahi, Graph Theoretic Methods in Multiagent Networks, Princeton University Press. [18] Anon., Final Proceedings, Solar Power Satellite Program Review, Conf. Report, Lincoln, NE, DOE/NASA Satellite Power System Concept Development and Evaluation Program, April. [19] Andre Kurs, Robert Moffatt, and Marin Soljacic, Simultaneous mid-range power transfer to multiple devices, Applied Physics Letters, Vol. 96 (2010). [20] Matthew H. Brown, Richard P. Sedano, Electricity Transmission: A Primer, technical report to the National Council on Electricity Policy, June 2004. [21] Anonymous, Canada-Northwest-California Transmission Options Study, Northwest Power Pool, Northwest Transmission Assessment Committee, NW-California Study Group, May.

Analysis of the influence of distributed generation on voltage dips in electrical networks. Sigridt García-Martínez and Elisa Espinosa-Juárez, Faculty of Electrical Engineering, Universidad Michoacana de San Nicolás de Hidalgo, Morelia, Michoacán, 58060, México. ABSTRACT This article presents an analysis of the influence of distributed generation on the frequency of occurrence of voltage dips in electrical networks. It shows how the characterization of the system with respect to voltage dips can be affected by the presence of distributed generation. The analysis is performed using the fault positions method with a program implemented in Matlab. Several case studies are presented on the IEEE 57- and 118-bus test systems, taking into account different percentages of distributed generation penetration. Keywords: Voltage Dips, Distributed Generation (DG), Fault Positions Method, Power Quality. 1. INTRODUCTION In recent years, the issues of conservation of natural resources and environmental protection have gained great importance. Electric power generation is an area of great concern due to environmental problems. For this reason, the use of green energy has been promoted in order to reduce the gas emissions that cause global warming, in addition to promoting the rational use of electricity and thus contributing to improving the quality of the environment. Some countries have incentive systems for green energy generation, and some have established goals for the percentage of electricity generated from renewable sources with respect to traditional generation sources; over the next ten to twenty years, these targets vary from 10% to more than 40% in some countries [1][2]. Accordingly, in recent years there has been a great boost to the development and use of different technologies related to the generation of renewable energy on a small scale, which has been called Distributed Generation. Distributed Generation (DG) is defined as the strategic use of modular power generation units connected to electrical networks. In general, the power range is from 3 kW to 20 MW; however, some references indicate that the power range varies from 5 kW to 300 MW [3][4]. DG units are mainly installed to provide real power to the electrical grid, so their power factors are 1.0 or close to 1.0 [4]. There are several advantages to installing small generation units; these can be economic, environmental, technical (voltage support) and even political (competition). Although the costs associated with DG are still high, DG could be a solution in situations where high reliability of the power supply is needed [5]. However, DG introduces changes not in the network topology, but in the direction of flows and in voltage levels [6]. Therefore, it is important to consider that the installation of DG units, in addition to the economic benefit, must guarantee the reliability, safety and quality of the power supply. In addition, it must be taken into account that the more DG is introduced into a system, the greater its influence on the behavior of the power system [7]. In general, DG tends to help mitigate voltage sags because it increases the fault voltage level of the network buses and also helps maintain voltages during faults [3][5].
Likewise, the characteristics of voltage sags due to faults in electrical networks can be affected by the presence of DG; several factors, such as the level of DG penetration, can change the behavior of the system with respect to voltage dips. Therefore, it is interesting to analyze the impact of DG on the probability of voltage sags at the system buses. Voltage dips are one of the main disturbances that affect power quality; they are also responsible for significant economic losses in industry because they can cause many devices to malfunction. Voltage dips often cause complete stoppages of industrial processes, stoppages that further increase the losses. For example, studies estimate that in the European Union the losses in the industrial sector due to short interruptions and voltage dips were 85 billion euros in 2007 [8]. Voltage dips are reductions in the rms value of the voltage to between 10% and 90% of the nominal voltage, with durations from 0.5 cycles to 1 minute [9], and are mainly produced by short circuits in the electrical system. Much of the equipment used in industry, such as programmable logic controllers and computers, among others, is sensitive to voltage dips.
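A recorded event can be tested directly against this definition; the snippet below is a minimal sketch of that test (the thresholds come from the definition above; the function name and the 60 Hz default are illustrative assumptions):

```python
def is_voltage_dip(residual_pu, duration_s, f_hz=60.0):
    """Dip test per the definition above: residual rms voltage between
    0.1 and 0.9 p.u., lasting from half a cycle up to one minute."""
    half_cycle = 0.5 / f_hz
    return 0.1 <= residual_pu <= 0.9 and half_cycle <= duration_s <= 60.0

print(is_voltage_dip(0.65, 0.2))  # True: 65% residual voltage for 200 ms
print(is_voltage_dip(0.95, 0.2))  # False: above the 90% threshold
```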

Several studies have been carried out to understand, quantify and reduce the impact of voltage dips in electrical networks. There are various methods for predicting and analyzing voltage sags; the number of voltage sags expected at a particular location in a system can be estimated through monitoring or through stochastic methods [10]. Through monitoring, information about the magnitudes and durations of the dips at particular system buses can be obtained. However, a very long monitoring period may be necessary to obtain accurate results in the characterization of the system [10]. Stochastic methods based on the network model and statistical fault data are widely used to predict voltage dips at a bus of interest. Stochastic prediction is a very useful tool and can be used in parallel with monitoring [10]. One of the most widely used stochastic methods for estimating voltage sags is the fault positions method [11]-[13]. Many authors use this methodology to analyze voltage sags [12]-[15]; with this method, the expected number of voltage sags at any bus of the electrical system under study can be obtained. In this paper, the impact of voltage dips in an electrical network is analyzed taking into account different levels of DG penetration, using the fault positions method. Section 1 presents a brief introduction to voltage dips and some characteristics of DG. Section 2 describes the fault positions method and its implementation. In Section 3, the case studies are presented and an analysis of the impact of DG on electrical networks is carried out. Finally, Section 4 presents the conclusions of the analysis. 2. IMPLEMENTATION OF THE FAULT POSITIONS METHOD The fault positions method allows a stochastic prediction of the expected number of voltage sags at any bus of the electrical system under study [13]. The method consists of selecting fictitious fault positions on the buses and lines of the network, from which the characteristics of the voltage sags can be obtained. Each preselected fault position is assigned a certain fault probability value, taking into account statistical data on faults in the electrical system. Through conventional short-circuit calculation techniques, the magnitudes of the voltage sags are obtained, and the frequency of occurrence is obtained by combining these results with historical data [10][16]. It is important to note that when more fault positions are used in this method, the results improve but the computation time increases. In this work, the fault positions method was implemented in Matlab by modifying the admittance matrix of the systems in their original state, inserting fictitious buses corresponding to the preselected positions; an additional procedure was then applied for the fault calculations. Prefault voltages were obtained by performing power flow studies using the PSS/E software [17]. 3. CASE STUDIES In order to analyze the influence of DG on voltage dips in electrical networks, studies have been carried out using the fault positions method on the IEEE 57-bus and 118-bus test systems. The number of fault positions for the case studies was selected taking into account the precision of the numerical calculations and the computational requirements [19]. A. Studies on the IEEE 57-bus test system The IEEE 57-bus test system consists of 57 buses interconnected by means of 63 lines, 15 transformers, and 7 generating units [18]. In this case, 50 fault positions were considered on each line.
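As a rough sketch of the bookkeeping behind the fault positions method (the paper's own implementation is in Matlab and works on the admittance matrix; here a precomputed bus impedance matrix and per-unit prefault voltages are assumed, and all names and values are illustrative):

```python
import numpy as np

def expected_sags(zbus, v_pre, fault_positions, fault_rates, threshold=0.7):
    """Estimate voltage sags/year at every bus with the fault positions method.

    zbus            : (n, n) bus impedance matrix (per unit, complex)
    v_pre           : (n,) prefault bus voltages (per unit, complex)
    fault_positions : bus indices representing the fictitious fault points
    fault_rates     : faults/year assigned to each fictitious position
    threshold       : residual voltage below which a sag is counted
    """
    sags_per_year = np.zeros(len(v_pre))
    for f, rate in zip(fault_positions, fault_rates):
        # Residual voltage at bus k for a bolted three-phase fault at f:
        # V_k = V_k_pre - (Z_kf / Z_ff) * V_f_pre (classical short-circuit result)
        v_during = v_pre - zbus[:, f] / zbus[f, f] * v_pre[f]
        sags_per_year += rate * (np.abs(v_during) < threshold)
    return sags_per_year

# Toy 3-bus example (symmetric, purely reactive network; values illustrative).
Z = np.array([[0.20, 0.10, 0.05],
              [0.10, 0.25, 0.08],
              [0.05, 0.08, 0.30]]) * 1j
v0 = np.ones(3, dtype=complex)
print(expected_sags(Z, v0, fault_positions=[0, 1, 2], fault_rates=[0.5, 0.5, 0.5]))
```

Fault positions along lines would add rows/columns for the fictitious buses, which is what the admittance-matrix modification in the paper accomplishes.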
The faults causing the voltage dips were considered to be balanced three-phase faults. A fault rate of 0.50 faults/year has been assumed for all the lines of the system [11][14][19], and the fault rate per year on the buses has been assumed to be negligible. For all the case studies, the magnitudes and angles of the prefault voltages were obtained by performing a power flow study using the PSS/E software, as mentioned above. For the inclusion of DG in the system, small synchronous generators were considered, connected to load buses of the electrical system. The following case studies were analysed: A.1. Base case: the original conditions of the system are considered according to [18]. A.2. A 10% increase in power with DG was considered. A.3. A 20% increase in power with DG was considered. A.4. 50% of the original generation was replaced by DG. A.5. 21% of the original generation was replaced by DG. In case A.1, the system is analyzed in its original conditions, taking into account the prefault voltages obtained through a power flow study. To analyze the influence of DG on the expected number of voltage sags at the system buses, in cases A.2 and A.3 DG is included at different system buses and different penetration levels have been assumed (10% and 20% with respect to the total real power of the system). The number of voltage dips is calculated using the fault positions method. It is important to mention that the small generating units (DG) were placed at randomly selected load buses. Fig. 1 shows the influence of the DG penetration level on the frequency of occurrence of voltage sags. The generating units were connected at buses 23, 28, 31, 35, 43, 44 and 52. Fig. 1 shows the voltage sags at each bus of the system when a residual voltage threshold of 0.7 p.u. is considered, for cases A.1, A.2 and A.3, respectively.

The results indicate that the increase of DG in a power network leads to a reduction of voltage dips on almost all buses. Fig. 2 shows the results for a situation analogous to Fig. 1, but with the generation units moved to buses 5, 17, 33, 38, 53, 54 and 56. It can be seen that the voltage dips have decreased again; in addition, there is a greater variation at those buses where a generation unit was connected or that are very close to one. Fig. 3 and Fig. 4 present results under the same conditions as Fig. 1 and Fig. 2, respectively, but taking into account a residual voltage threshold of 0.8 p.u. In both figures, Fig. 3 and Fig. 4, the voltage dips at each bus decreased slightly with respect to the base case, A.1. Fig. 5 shows voltage dip graphs obtained considering a voltage threshold of 0.9 p.u. It is observed that DG does not contribute to reducing the voltage dips at most of the buses. Only buses 1, 2 and 17 present a slight decrease in voltage dips/year; for example, the number of dips/year at bus 2 decreases slightly from the base case to case A.3, where power was increased by 20% with DG. Table 1 and Table 2 show the voltage sags at the buses with the greatest variation, considering residual voltage thresholds of 0.7 and 0.8 p.u., respectively. It can be seen that for a threshold of 0.7 p.u. the differences in voltage dips between the three study cases do not exceed three voltage dips/year, while for the threshold of 0.8 p.u. the variations do not exceed two voltage dips/year.
Fig. 1: Voltage dips considering a 0.7 p.u. voltage threshold for cases A.1, A.2 and A.3. DG at buses 23, 28, 31, 35, 43, 44 and 52.
Fig. 2: Voltage dips considering a 0.7 p.u. voltage threshold for cases A.1, A.2 and A.3. DG at buses 5, 17, 33, 38, 53, 54 and 56.
Fig. 3: Voltage dips considering a 0.8 p.u. voltage threshold for cases A.1, A.2 and A.3. DG at buses 23, 28, 31, 35, 43, 44 and 52.
Fig. 4: Voltage dips considering a 0.8 p.u. voltage threshold for cases A.1, A.2 and A.3. DG at buses 5, 17, 33, 38, 53, 54 and 56.
Fig. 5: Voltage dips considering a 0.9 p.u. voltage threshold for cases A.1, A.2 and A.3. DG at buses 5, 17, 33, 38, 53, 54 and 56.
TABLE 1: BUSES WITH THE GREATEST VARIATION OF VOLTAGE DIPS DUE TO DG, CONSIDERING A 0.7 P.U. VOLTAGE THRESHOLD (columns: bus, voltage dips/year for cases A.1, A.2, A.3)

TABLE 2: BUSES WITH THE GREATEST VARIATION OF VOLTAGE DIPS DUE TO DG, CONSIDERING A 0.8 P.U. VOLTAGE THRESHOLD (columns: bus, voltage dips/year for cases A.1, A.2, A.3)
In cases A.4 and A.5, two of the original generation units, connected to buses 12 and 1 respectively, have been replaced by small generation units dispersed throughout the electrical network, representing a DG penetration of the order of 50% for case A.4 and 21% for case A.5. Fig. 6 to Fig. 8 present the voltage sag results for case A.4, in which the generating unit at bus 12 was replaced by DG using small units at 20 buses in the system. It can be clearly observed that the voltage dips at the buses decrease for all the voltage thresholds considered; however, the variation is significantly smaller for the voltage threshold of 0.9 p.u. Similarly, Fig. 9 and Fig. 10 present voltage dip results for case A.5, in which 21% of the original generation is replaced by DG. From Fig. 6 and Fig. 9 it can be seen that for a threshold voltage of 0.7 p.u., when the penetration level is lower (around 21% in case A.5), the voltage sags/year decrease to a lesser extent at all buses with respect to the base case. A similar situation is observed in Fig. 7 and Fig. 10, corresponding to a voltage threshold of 0.8 p.u. A similar behavior occurs for the voltage threshold of 0.9 p.u.; in case A.5 most of the buses do not show changes.
Fig. 6: Voltage dips considering a 0.7 p.u. voltage threshold for cases A.1 and A.4.
Fig. 7: Voltage dips considering a 0.8 p.u. voltage threshold for cases A.1 and A.4.
Fig. 8: Voltage dips considering a 0.9 p.u. voltage threshold for cases A.1 and A.4.
Fig. 9: Voltage dips considering a 0.7 p.u. voltage threshold for cases A.1 and A.5.
Fig. 10: Voltage dips considering a 0.8 p.u. voltage threshold for cases A.1 and A.5.

Table 3 shows, for the different cases analyzed, the average estimate of voltage dips/year over all system buses, considering voltage thresholds of 0.7, 0.8 and 0.9 p.u. As can be seen, in cases A.2 and A.3 the average number of voltage sags per year decreases when there is DG in the system under study. Furthermore, when DG is introduced without changes in the total power generation (cases A.4 and A.5), the number of voltage dips also decreases.
TABLE 3: AVERAGE VOLTAGE SAGS PER YEAR FOR THE IEEE 57-BUS TEST SYSTEM (by sag range in p.u., for cases A.1 to A.5)
B. Studies on the IEEE 118-bus test system The IEEE 118-bus test system consists of 118 nodes interconnected by 177 lines and 54 thermal units. The system data are provided in [20]. In this case, 15 fault positions were considered on each line. The faults causing the voltage dips were considered to be single-phase faults. A fault rate of 2.58 faults/year has been assumed for all the lines of the system; this value has been obtained by considering a random number of faults between 0 and 5 on each line; the total number of faults is then divided by the total number of lines to obtain the fault rate value [19]. The bus fault rate per year has been assumed to be negligible. For the inclusion of DG in the system, small synchronous generators connected to load buses of the electrical system were considered. The following case studies were analysed: B.1. Base case: the system is in its original conditions according to [20]. B.2. A 20% increase in power with DG was considered. B.3. 23% of the original generation was replaced with DG. In case B.1, the system is analyzed in its original conditions. In case B.2, the influence of DG on the occurrence and propagation of voltage dips is analyzed when DG is included at different system buses, assuming a penetration level of 20% with respect to the total real power of the system. In case B.3, around 23% of the total generation was replaced by DG. Fig. 11 to Fig. 13 show the influence of the DG penetration level on the frequency of occurrence of voltage sags when residual voltage thresholds of 0.7 p.u., 0.8 p.u. and 0.9 p.u. are considered, for cases B.1, B.2 and B.3. The results indicate that an increase in DG in an electrical network leads to a reduction in voltage dips on almost all buses.
Fig. 11: Voltage dips considering a 0.7 p.u. voltage threshold for cases B.1, B.2 and B.3.
Fig. 12: Voltage dips considering a 0.8 p.u. voltage threshold for cases B.1, B.2 and B.3.
Fig. 13: Voltage dips considering a 0.9 p.u. voltage threshold for cases B.1, B.2 and B.3.
The changes in the voltage dips (in percentage) corresponding to the 0.7, 0.8 and 0.9 thresholds, for cases B.2 and B.3 with respect to the base case, are shown in Fig. 14 to Fig. 16. Note that the largest difference occurs when 23% of the original power is replaced by DG. For example, from Fig. 16, at bus 44 the voltage dips decrease by approximately 60%. However, the results show that some buses experienced more sags due to the location of the DG. For example, from Fig. 16, at bus 80 the voltage sags increase by around 80%; this is because the generator that was connected at this bus was replaced by small units dispersed throughout the network. As a consequence, bus 80 does not have the same voltage support.
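The percentage variations plotted in Fig. 14 to Fig. 16 follow from a simple per-bus comparison against the base case; a minimal sketch (illustrative numbers only, not the paper's results):

```python
import numpy as np

def sag_variation_percent(base, case):
    """Per-bus percentage change in sags/year relative to the base case."""
    base = np.asarray(base, dtype=float)
    case = np.asarray(case, dtype=float)
    return (case - base) / base * 100.0

# Illustrative per-bus counts (sags/year) for a base case and a DG case.
base_sags = [12.0, 10.0, 8.0]
dg_sags = [9.0, 10.5, 3.2]
print(sag_variation_percent(base_sags, dg_sags))  # [-25.   5. -60.]
```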

Fig. 14: Variation of the voltage dips considering a 0.7 p.u. voltage threshold, B.1 vs. B.2 and B.1 vs. B.3.
Fig. 15: Variation of the voltage dips considering a 0.8 p.u. voltage threshold, B.1 vs. B.2 and B.1 vs. B.3.
Fig. 16: Variation of the voltage dips considering a 0.9 p.u. voltage threshold, B.1 vs. B.2 and B.1 vs. B.3.
4. CONCLUSIONS The increasing level of DG penetration makes it necessary to study the impact of DG on the performance of electrical systems. This article presents an analysis of the influence of DG on voltage dips in electrical networks. The fault positions method has been implemented and applied to the IEEE 57-bus and 118-bus test systems. With these examples, it is shown that, in general, DG helps to mitigate voltage sags. Across the cases analyzed, the results show a significant variation in the number of voltage dips for the 0.7, 0.8 and 0.9 p.u. thresholds. In addition, the analyzed case studies show that, with DG of total power equivalent to the base case, the voltage sags decreased. However, some buses in the system show increased voltage dips because they do not have the same voltage support when a nearby generating unit is replaced with DG. 5. REFERENCES [1] European Energy Portal, Renewables, Brussels, Belgium (April 2009). [Online]. [2] National Renewable Energy Laboratory, US Department of Energy (July 2008), 20% Wind Power: Increasing the contribution of wind power to the US electricity supply. [Online]. [3] J. V. Milanovic, H. Ali, M. T. Aung, Influence of distributed wind generation and load composition on voltage sags, presented at the International Conference on Power System Transients (IPST 07), Lyon, France. [4] A. S. Yilmaz, E. Yanikoglu, Behavior of integrated generation during voltage sags in distribution networks, Scientific Research and Essay, Vol. 4, March 2009. [5] J. A. Martínez-Velasco, J. Martin-Arnedo, EMTP model for the analysis of the impact of distributed generation on voltage sags, IET Gener. Transm. Distrib., Vol. 1, No. 1, January. [6] J. A. Martínez-Velasco, J. Martin-Arnedo, Distributed generation impact on voltage sags in distribution networks, 9th International Conference on Electrical Power Quality and Utilisation, Barcelona, 9-11 October. [7] J. G. Slootweg, W. L. Kling, Impacts of distributed generation on power system transient stability, Proc. IEEE Power Engineering Society Summer Meeting 2002, Vol. 2, July 2002. [8] J. Manson, R. Targosz, European Power Quality Survey Report, Leonardo Energy Initiative, November. [9] IEEE Recommended Practice for Monitoring Electric Power Quality, IEEE Std., November. [10] M. H. J. Bollen, Understanding Power Quality Problems: Voltage Sags and Interruptions, IEEE Press Series on Electrical Engineering. [11] L. Conrad, K. Little, C. Grigg, Predicting and preventing problems associated with remote fault-clearing voltage dips, IEEE Trans. on Industry Applications, Vol. 27, No. 1, Jan./Feb. [12] M. R. Qader, M. H. J. Bollen, R. N. Allan, Stochastic prediction of voltage sags in a large transmission system, IEEE Trans. Ind. Appl., Vol. 35, No. 1, Jan./Feb. [13] G. Olguin, M. H. J.
Bollen, The fault positions method for stochastic sag prediction: a case study, Proceedings of the 7th International Conference on Probabilistic Methods Applied to Power Systems, PMAPS 2002, Vol. 2. [14] M. F. Alves, R. C. Fonseca, Voltage sag stochastic estimation, Proc. Industry Applications Conference 2001, Vol. 3, September 30 - October 4, 2001. [15] M. T. Aung, J. V. Milanovic, Stochastic prediction of voltage sags by considering the probability of failure of the protection system, IEEE Trans. on Power Delivery, Vol. 21, No. 1, January. [16] E. Espinosa-Juarez, A. Hernández, Voltage sag state estimation: an approach based on the concept of fault positions, Proceedings of the International Conference on Harmonics and Quality of Power (ICHQP 2006), Cascais, Portugal, October 1-5. [17] Power System Simulator for Engineering (PSS/E), Siemens Power Transmission & Distribution, Inc., Power Technologies International, USA. [18] R. Christie, IEEE 57 Bus Test Case, College of Engineering, Electrical Engineering, University of Washington, August. [19] E. Espinosa, A. Hernández, An analytical approach for stochastic assessment of balanced and unbalanced voltage sags in large systems, IEEE Trans. on Power Delivery, Vol. 21, No. 3, July. [20] R. Christie, IEEE 118 Bus Test Case, College of Engineering, Electrical Engineering, University of Washington, August.

Use of DSS in an industrial context. Authors: Juan Pablo Musella, Gustavo Janezic, Diego Branca, Daniela López De Luise, James Stuart Milne, Germán Ricchinni, Francisco Milano, Santiago Bosio. University of Palermo, School of Engineering, Mario Bravo 1050, C1188AAB, Argentina. Abstract The purpose of this article is to present the management of socio-political aspects in the context of a decision support prototype, commonly known as a DSS (Decision Support System), which was configured to be consulted by industrial companies. It consists of a base of the company's own parameters that directly affect the business, together with input parameters associated with industrial sectors and macroeconomic factors that directly affect the activity. Its architecture contains two expert systems (one feeding the other) and a predefined set of data mining tools. The prototype presented here aims to provide decision makers with information to minimize the risks they face in a given business. The main contribution of this study is the inclusion of sociopolitical variables as part of the information entered in the Knowledge Base, and how these are derived from data existing in the community. Keywords: Industry, Decisions, DSS, Expert System, Data Mining. 1. Introduction DSS are information systems aimed specifically at supporting organizational and business decisions. Their main objective is to answer questions about things that are difficult to assess, speculating about future events (generally where the context of the company is complex and changeable). They are commonly used for decision-making by middle and lower managers, and typically provide transactional and cross-departmental information. Data is generally collected from operational or transactional databases. One definition of DSS describes them as computer-based interactive information systems whose models describe and predict production processes. They are also defined as systems designed to help decision makers use data and models to identify and solve problems and make decisions [1]. DSS have four main characteristics: they integrate data and models; their design is intended to assist managers in decision-making processes; they do not replace (but support) human decision-making; and their purpose is to improve the effectiveness and efficiency of decisions. In the late 1970s, DSS were promoted through the spread of computers. According to Power [2][3], they can process various types of input and provide alternative decisions. They can therefore be classified into the following four categories: Communication-driven DSS: allow simultaneous support for many users on a specific shared task, e.g., chats, instant messaging, collaborative systems, etc. Data-driven DSS: focused on accessing and manipulating time series for a given organization. They are usually used for specific queries on databases or stores of data that changes over time (1). This category includes geographic information systems, which can be used for the intelligent management of geographic data from maps. Document-based DSS: organize, retrieve, and manage unstructured information in a variety of electronic formats. They are intended for a large number of users, and their main purpose is to crawl web pages to find documents. Examples include various text-analysis tools and wikis (websites that volunteers can freely edit). Knowledge-driven DSS: provide expertise accumulated from facts, rules and procedures, either from the actual company or imported from similar structures.
They are intended to be applied to problem solving, and are mainly used to advise management on certain products or services. Examples include the processing of large volumes of data, the recognition of hidden patterns, and the intelligent representation of the discovered patterns. Model-based DSS: focused on accessing and managing statistical and financial models. They can be used for optimization or simulation; one case is the prediction of business processes using information from past events to answer hypothetical questions. Some of the models described above exist on the market, as proprietary or open-source solutions, as described in Table 3. (1) Database with specific content, generally oriented to a specific field (mainly for a certain company, organization or topic).

Open queries are built from terms connected to each other by certain specific words that establish a special relationship between them. The metavariables are indicated in square brackets. In the figure, the question has two metavariables: parameter1 and parameter2. These two metavariables will eventually be replaced by specific variables (predefined by the system operator). For example, parameter1 could be linked to the total screw consumption variable and parameter2 to the total screw production time.
"What happens with [parameter1] over [parameter2]" (Fig. 1. Example of a query with two metavariables)
As can be seen, both commercial and open-source systems have various fields of application. In any case, there is a strong tendency to do business with commercial DSS. This article describes the main architecture of the HERCULES DSS prototype from the AIGroup research laboratory. It manages the business's transactional and operational account information. It also has a statistical database of the business sector and a more macroeconomic database of the country to which the business belongs. From the logical perspective, all the stored data represents a large set of variables classified into: business sector (sector for short), macroeconomic, and microeconomic. The selected macroeconomic variables arise from the rules of macroeconomic models, among them GDP (Gross Domestic Product), the CPI (Consumer Price Index) and the occupancy rate. While the system constantly uses several variables, many others depend on the specific query that the DSS answers. The sectoral variables focus mainly on the business activity, covering issues that range from issuances to the characteristics of competitors. Therefore, the set of actual variables changes with the specific analysis that the DSS must perform. Among the variables for the microeconomic study is the balance-sheet information, since it provides the measure of required topics such as the business Economic Activity normalized by country or region. In HERCULES, the perspectives of decision makers are included as part of the search for solutions through the process of defining sets of variables. A good selection allows more or fewer solution alternatives, even when some variables represent contradictory information. Experience also provides better definitions and, consequently, fine-tuned answers. From the industrial point of view, HERCULES is a useful tool for a wide range of issues. To limit the complexity of the implementation, the types of possible questions were analyzed, defined and restricted. Questions are classified into two main types: open and restricted queries. Since questions are reusable across many studies, there are metavariables that act as linked wildcards (see Fig. 1). The questions are expressed as a set of metavariables. This document is organized as follows: architecture (Section 2), query and interaction management approach (Section 3), economic models and specifications (Section 4), sociopolitical issues management (Section 5), current status of the implementation (Section 6), and conclusion and future work (Section 7). [4][5][6] 2. HERCULES Architecture The prototype has two user interfaces (see Fig. 1). The first is for administrative tasks on the problem domain. It allows the configuration of the data universe and the import activities through a special module called the Parser. The Parser creates internal files for use by an expert system called the Internal Expert System (IES).
These files are automatically processed using alternative data mining techniques, and the results are formatted through the user GUI for correct display. The Parser goes through three steps: selecting files and variables; making the relationships between the selected files explicit (using foreign and primary keys, with record filtering based on a set of variable values); and exporting the data to a plain text file that can be processed by any traditional data mining tool. The second user interface is intended to provide interaction with the business consultant and is directly connected to a second expert system called the User Expert System (UES), fed by the IES and by user commands to display the DSS results. The UES is complemented by other minor modules to form the Advisory System. The user interface provides a formal way to enter variable declarations and request a specific output format. Figure 1 shows the connection between the configuration interface and the data conversion and processing. The system builds one or more CSV files with processed data. These files feed the extracted data to a set of tools as indicated by the IES. To do that, the IES evaluates previously known best DM approaches for the subsets of data it recommends. In this way, the decision of what data is used and how it is processed rests with the IES and its knowledge about the type of query and the problem. The results are recorded in the historical database for later use by the advisory system and for automatic adjustment of internal metrics.
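A minimal sketch of the open-query expansion mechanism described above (Python; the template echoes the example of Fig. 1, and all names and bindings are hypothetical rather than the prototype's actual API):

```python
import re
from itertools import product

META = re.compile(r"\[(\w+)\]")

def expand_query(template, bindings):
    """Replace each [metavariable] with every combination of candidate
    variables, yielding fully bound queries (the 'expansion' step)."""
    names = META.findall(template)
    for combo in product(*(bindings[n] for n in names)):
        query = template
        for name, value in zip(names, combo):
            query = query.replace(f"[{name}]", value, 1)
        yield query

# Hypothetical bindings for the example query of Fig. 1.
template = "What happens with [parameter1] over [parameter2]"
bindings = {"parameter1": ["total screw consumption", "total screw sales"],
            "parameter2": ["total screw production time"]}
for q in expand_query(template, bindings):
    print(q)
```

Each expanded query contains no metavariables and can then be routed to the IES for processing.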

Regarding human-machine interaction, this prototype provides a sophisticated set of data capture and conversion tools integrated into the user interfaces. It is important to note that one of the critical functionalities of HERCULES is the ability to load information from a variety of sources (document files, flat files, worksheets, databases, etc.) into its internal database, and the ability to process it successfully with a variable subset of tools. The user can define subsets of main variables for the current business problem and then import them considering standard (ISO/IEC set included) or custom units. The main menu (see Fig. 2) allows the user to determine the type of study to be carried out, define elementary or compound variables, configure the connection to the database and establish DM restrictions. It also allows a textual description of the test to be written for future reference. For the creation of elementary variables (see Fig. 3 and 4) there is a set of fields to fill in covering domain, units, and forbidden and allowed values. It is also possible to create groupings of variables by selecting one or more elementary variables (see Fig. 5). [7][8][9]
Fig. 2. HERCULES main menu.
Fig. 3. Creation of variables.
Fig. 1. Simplified schematic diagram (administrative interface, data conversion and manipulation, DB, ES, smart updater, data mining techniques 1..N, user interface, advisory system, historical database).
The data generator subsystem is another important module. Its main objective is the unification of the information for subsequent DM; it generates output files from all the inputs. Figure 6 shows the three modules that implement the three main steps described here: 1) Importer: its main purpose is to import from each source into a common repository. It reads a configuration file specifying the types of formats to process, and can perform a temporary transformation on the data during export, to keep the information in the temporary data file synchronized. 2) Processor and Filter: merges input data into a unified database, joining, filtering, and projecting subtables as needed. It makes heavy use of dynamic SQL generated based on user choices and definitions. 3) Exporter: creates a set of files that can be used by the internal DM toolkits.
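The three-step data construction process can be sketched with the standard library alone; the file names, table names and SQL below are hypothetical placeholders, not the prototype's actual configuration:

```python
import csv
import sqlite3

def importer(conn, table, path):
    # Step 1: load one source file into a common repository table.
    with open(path, newline="") as fh:
        rows = list(csv.DictReader(fh))
    cols = list(rows[0].keys())
    conn.execute(f"CREATE TABLE {table} ({', '.join(cols)})")
    conn.executemany(
        f"INSERT INTO {table} VALUES ({', '.join('?' for _ in cols)})",
        [tuple(r.values()) for r in rows])

def process_and_filter(conn, sql):
    # Step 2: join/filter/project with dynamically assembled SQL.
    return conn.execute(sql).fetchall()

def exporter(rows, header, path):
    # Step 3: emit a unified CSV for the data mining toolkits.
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(header)
        writer.writerows(rows)

conn = sqlite3.connect(":memory:")
importer(conn, "sales", "sales.csv")        # hypothetical input files
importer(conn, "sectors", "sectors.csv")
rows = process_and_filter(conn,
    "SELECT s.period, s.amount, t.sector_index "
    "FROM sales s JOIN sectors t ON s.period = t.period "
    "WHERE s.amount > 0")
exporter(rows, ["period", "amount", "sector_index"], "dm_input.csv")
```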

Fig. 4. Variable configuration dialog.
It can be seen that in some cases the metavariables are replaced with a specific variable (for example, sales rate in sample 1). There is also a special type of question, such as case 3, which has no explicit metavariable but, to be answered, requires a task called expansion before processing. During expansion, the query is related to and replaced by a set of expanded queries in which there are no metavariables. Both the metavariables and the expansion procedure are parametric and can be defined by the system administrator. Although open questions are very flexible, there are many other questions that are frequently used and are therefore easy to predict, because they fall within the classic question set. For such cases there is a set of restricted queries without any metavariables or variables. Table 2 shows the complete list designed for HERCULES. Some of them have been considered for future implementation.
Fig. 5. Aggregated variables definition dialog.
The functional prototype diagram is shown in Figure 6.
Fig. 6. Data construction process (Importer, Filtering, Temporary DB, Exporter).
3. User query management Due to the human-machine complexity, a generic and flexible query mechanism is required. This is achieved by using a set of metavariables, which make up a large number of predefined customizable queries, referred to here as open queries. Metavariables can be assigned to specific variables during the query process. Table 1 shows some open queries, classified by their coverage. 4. Economic models and specifications In general terms, models and specifications have been extracted for the macroeconomic, microeconomic and sociopolitical perspectives. This section briefly describes the first two, and the next section focuses on sociopolitical concerns. Macroeconomics deals with economic aggregates, monetary expansion and recession, total goods and services, economic growth, the inflation rate, unemployment, the balance of payments, and the exchange rate. It also covers the growth of output and employment over time periods (since it reflects the growth of the economy) and the short-term variations that constitute the business cycle [10][11][12]. Despite the great contrast between macro and microeconomics, there is no conflict between them: macroeconomics is just the aggregation of markets, and the difference is mainly in the approach and presentation. The scientific study of how to define, explain, and anticipate economic phenomena using formalized tools to evaluate, model, and structure information is called econometrics. Among the main projects in the area, it is worth highlighting the LINK project, which currently includes almost 80 models representing a total of 73 national economies and 7 regional aggregations. One of them is the Wharton-UAM model, which defines the study variables. [13][14][15]
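Several of the macroeconomic variables used by HERCULES and by the econometric models just mentioned are tied together by the basic national-accounts identity detailed in the next paragraphs; a minimal worked sketch with hypothetical figures:

```python
def gdp_expenditure(c, i, g, x, m):
    """Basic national-accounts identity: GDP = C + I + G + (X - M)."""
    return c + i + g + (x - m)

# Hypothetical quarterly aggregates in billions of local currency.
print(gdp_expenditure(c=650.0, i=190.0, g=140.0, x=80.0, m=60.0))  # 1000.0
```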

Some other tools and models are used in the field of macroeconomics, mainly related to the evaluation of policy effects and some future projections. There are also certain regional models, used by governmental and non-governmental entities, to make projections. Among them are the XS21 family of models, DSGE (dynamic stochastic general equilibrium) models, ARIMA (autoregressive integrated moving average) models, REM models, ARMA (autoregressive moving average) models, etc. [16][17][18] Macroeconomic variables are defined in HERCULES to cover various types of analyses. All of them feed an internal module that selects the variables and models relevant to the resolution of the actual query. The variables typically selected are those that appear in the basic national equations, such as: GDP (Gross Domestic Product), C (private consumption), G (public consumption), I (gross domestic investment, private or public), X (exports of goods and services), M (imports of goods and services). Others, equally important, are the accounts related to the balance of payments, monetary aggregates, price variations and commercial exchange rates. Because the aforementioned variables and their components are often used to extract time series, the HERCULES design includes them, along with additional ones related to the problem. It is important to note that the result is derived from a set of previously known rules that model knowledge about the content and use of the database in all similar cases. The data is updated periodically to improve accuracy. [19][20][21] Another complementary perspective is microeconomics. In this context, specific metallurgical companies that are publicly listed (in Argentina: Acindar, Siderar and Aluar) were selected, because they are the most important today. The related information was extracted from reliable sources. Sales, production quantity, the statement of financial position, and stocks are variables considered here. All this information is necessary to be able to carry out an in-depth analysis of the company. All these variables result in a large set of useful data for cross-referencing information and inferring external changes and economic cycles. From the resulting analysis, a set of numerical and nominal values is fed to an expert system. 5. Management of variables from the sociopolitical perspective The relationship between culture and sociopolitics has been studied by sociology since its first stage as a discipline, derived from the industrial, economic, sociological and political changes of the last century [22][23][24]. The structures of the company reflect the consolidation of this relationship, making them relevant for the socioeconomic evaluation of a region, country or sector. There is a deep link between culture (values, beliefs, attitudes, norms, etc.) and socioeconomic development (1). It has also been observed that the social dimension is crucial in determining business performance [25]. Hence, relating economics to the social sciences is a subjective, relative, complex and comparative endeavor that has become one of the recurring key themes of the social sciences (2). Below is a brief description of the variables considered in this field: a) Demographic variables: a preliminary study was carried out with classic demographic variables. They provide information about the regional context of the company. To obtain specific data for each case study, reference is made to well-known public sources.
For example, each state has a statistics center that publishes the official assessments. In several cases there are also private entities that are good alternatives. b) Socioeconomic variables: these variables are defined to address coverage issues. The first hurdle is providing fidelity and credibility to the data. Most of the studies and articles are written by sociologists and epidemiologists (3). That skews the focus, changing approaches in a way that is not fully supported by an automatic software processing device. c) Sociopolitical variables: each variable is analyzed and broken down into several measurable indicators. A good part of these indicators are covered by macroeconomic, microeconomic and socioeconomic variables. This point of view also focuses on the governability and legal security of each region, considering precedents. In a general sense, variables are a manifestation of models built on reality. For example, the human development index has been expanded to cover several additional well-being topics. It can be measured as a collection of economic and sociological indices developed by the United Nations Development Programme (UNDP) (4). In particular, the Human Development Index (HDI) is based on individual income, health (life expectancy), and culture (literacy and enrollment rates of primary, secondary, and university students). The variables are collected, logged and loaded into the prototype. The data is organized in a set of tables and pre-processed according to the actual requirements so that it can be processed by DM. This module can also perform conversions to different formats and a set of precoded adaptations. Once prepared and converted, an exploratory analysis is performed and the resulting data is used to feed the IES to obtain the best interaction and behavior model. It is important to note that classical variables in this field typically lack a proven model to integrate them into an equation. 6. Current status of the implementation

The user interface for data entry and the configuration of variables have already been implemented, together with the adaptation modules, the database, and the IES with some rules. At present, the full set of rules is being implemented to model the initial DM results. Rules have been developed to model parts of the macroeconomic, microeconomic, and sociopolitical perspectives. 7. Conclusion and future work The HERCULES prototype, its general design and its main architectural features have been presented. It remains to study the self-tuning of internal working parameters and further open and restricted queries. The alternatives pending to be covered are the automatic decomposition of a problem into several subproblems, the complete tuning of the expert systems, and the results GUI. 8. References [1] E. Viglizzo, Ecological and economic sustainability of livestock (1999), Argentine Symposium on Animal Productivity. [2] J. C. Ascough, G. H. Dunn, G. S. McMaster, L. R. Ahuja, A. A. Andales, Producers, Decision Support Systems, and GPFARM: Lessons Learned from a Decade of Development (1997), SIMMODS. [3] INTA, National Institute of Agricultural Technology (2009). [4] J. Ascough, M. Shaffer, D. Hoag, et al., GPFARM: An Integrated Decision Support System for Sustainable Great Plains Agriculture (1998), 10th Meeting of the International Soil Conservation Organization, Purdue University. [5] V. A. Ferreira, D. G. De Coursey, B. Faber, L. Knapp, R. Woodmansee, Terra tools and techniques for ecosystem management (1995), Congress. Res. Serv. Report to Congress, Library of Congress, Washington. [6] J. Milne, A. Sibbald, Modeling of grazing systems at the farm level (1998), Annales de Zootechnie, Vol. 47. [7] L. Silva, B. Revello, Construction of a decision-making support system for the management area of the Hospital de Clínicas (2000), Computing Institute, School of Engineering, University of the Republic. [8] J. Watson, M. Rainer, Koh, Critical review of the EIS in the context of intelligence support (1991), in Strategic Intelligence Management: Techniques and Technologies, Mark Xu (ed.). [9] J. Wetherbe, Executive Information Requirements: Getting It Right (1991), MIS Quarterly. [10] R. Dornbusch, S. Fischer, Macroeconomics (2004), 5th edition, McGraw Hill. [11] P. Capros et al., Association for Applied Econometrics (1991), Conf. on Int. Energy Market Modeling, France. [12] A. Pulido, LINK Project, Madrid, School of Economics. [13] S. Martínez Vicente, M. Blanco Losada, E. López Díaz-Delgado, Regional Planning Model: the XS21 family of models (X-XXI Century) (2009). [14] G. Janezic, D. Branca, D. López De Luise, J. Azcurra, J. Musella, F. Milano, S. Bosio, Decision Support System in Industrial Contexts, First Peruvian Congress of Operations and Systems Research, November, Lima, Peru. [15] G. J. Escudé, Progress in Computational Economics (2008), Central Bank of Argentina, ARGEMmy. [16] Ministry of Economy of the Argentine Republic (2009). [17] National Directorate of International Accounts of the Argentine Republic (2009). [18] National Institute of Statistics and Censuses of the Argentine Republic (INDEC) (2009). [19] Macroeconomic Monitoring Group of the Argentine Republic (GMM) (2009), URL: gmm.mecon.gov.ar [20] M. Godson,
Microeconomics: Theory (1994). Centro de Estudios Ramón Areces, 1st ed. ISBN:
[21] G. Brunetti, U. Collesei, T. Vescovi, U. Sòstero. The bookstore as a business: economics and administration. Fondo de Cultura Económica Ed., 2004. ISBN:
[22] P. Galindo Calvo. Business culture in Andalusia: a sociological study of the small business. University of Granada, Department of Sociology, Campus de Cartuja, Granada. galindo@ugr.es
[23] P. Galindo Calvo. Sociological study of the small businessman from Granada: business culture. Editorial University of Granada, 2006.
[24] N. Guillamón Cano. Socioeconomic variables and internalized and externalized problems in children and adolescents.
[25] M.A. de la Vega Toledo. Geographical analysis of the relationship between biophysical and socioeconomic variables and their influence on poverty categories in the national territory. Rafael Landívar University, College of Environmental and Agricultural Sciences.

OPTICAL TUNING OF TC IN ANY SUPERCONDUCTOR

S. Curatolo**
CZT Inc., Lawrence, Kansas, USA
** Corresponding Author: Susana Curatolo, President, CZT Inc. cztinc@hughes.net

Abstract

The mechanism of excitonic enhancement (EEM) [1] has been demonstrated theoretically and experimentally by CZT Inc. as the theory of superconductivity in hole-carrier systems such as YBCO and BISCCO [5][6], as well as other novel ones such as boron-doped silicon [2][3][4]. Because excitons are hydrogen-like and have excited energy levels, these Cooper-pair exchange particles can be excited by suitable incoherent infrared light [5][6], resulting in a large enhancement of the superconducting transition temperature [5][6]. It has been shown experimentally, by applying EEM enhancement to thin-film and bulk YBCO with an optimal bulk YBCO system and observing the diamagnetism before and after applying IR light, that the transition temperature increased by ~80 K, exactly as predicted by EEM theory [5][6]. These original measurements were independently verified on an oxygen-deficient YBCO (dirty superconductor, Bose-Einstein condensate) thin film by measuring the transition temperature with AC voltage-versus-temperature measurements before and after applying incoherent IR light; the transition temperature increased by ~17 K, from 69.5 K to 86.5 K [5]. Because the Tc rise has been shown to be exactly equal to the value calculated by EEM theory [6], EEM is an invaluable tool for studying and predicting the Tc rise in any IR-stimulated cuprate-based HTS. As a result of these experimental confirmations in YBCO, in this study we used EEM to predict and compare the Tc enhancement in the 8 phases of TBCCO, Hg-TBCCO and Pb-Sr-TBCCO HTS (TBCCO-2122, 2212, 2213, 2223, 1212, 1223, 1234, 1245) against YBCO tested with EEM. Since high-critical-temperature (HTc) superconductors such as TBCCO have important applications in many electrical and electronic devices, due to their absence of resistance, low energy loss, exclusion of magnetic fields, and special quantum electronic features such as the Josephson effect, this study shows near-room-temperature improvement for all TBCCO phases and for Hg-TBCCO. Observations of the behavior of the data near the dirty limit are of great importance for the deficiencies of the SI (superconductor-insulator) phase. This result points to the ability to bridge the well-known limitations of Tl-Cu-based HTS. A TBCCO operating at near room temperature would allow less-than-optimal thin-film processes to be used, leading to a cheaper superconductor with higher Tc yield, lower power demand, more phase stability, simpler manufacturability and wider applications.

1. Introduction

Hole-carrier superconducting systems considered BCS, such as boron-doped silicon [4], are in fact EEM (Excitonic Enhancement Mechanism) systems, since their charge carriers are predominantly valence-band holes and they have an empty s conduction-band state above the valence p band. Their relatively low Tc values, according to EEM, are the result of a smaller band gap and smaller excitonic binding energies. These systems have some features in common with cuprates, such as charge carriers that are mainly p-orbital valence-band holes and a conduction band that is an empty s-orbital state; a second difference is the presence of d orbitals in cuprates, which is absent in the other systems.
In cuprates, the superconducting Tc value has also been found to be correlated with the number of CuO2 layers in the crystal structure, with a maximum reached at 3 layers.

In addition to this feature, the excitonic enhancement mechanism (EEM) [1] for cuprates also possesses these characteristics [9]. Superconducting cuprates of bismuth and thallium with structures related to the Aurivillius family of oxides, with Tc in the 100 K region, also possess them. Thallium cuprates of the Tl-Ba-Cu-O-2122, 2212, 2213, 2223 systems show a Tc onset in the 100 K region, with resistivity and susceptibility behavior as high as 130 K for a Tl-Ba-Cu-O-2223 (optimal thin-layer) sample [10]. The basic EEM is built on a two-band model, with a partially filled valence p band and an empty s conduction band, separated by a positive band gap. In addition to Tc, the EEM model also predicts other novel properties in both the normal and superconducting phases, such as the sign changes observed in the mixed-state quantum Hall effect [10] and in the thermoelectric power [11]. As determined experimentally and theoretically, this specific geometric pattern produces the properties associated with cuprates such as YBCO [1][6] and now with the other cuprates Tl-Ba-Cu-O-2122, 2212, 2213, 2223, 1212, 1223, 1234, 1245; first among these is the correlation of Tc with the carrier density and the number of CuO2 layers [1][6] mentioned above. It is therefore our intention in this article to predict that excitons in thallium-based cuprates behave in the same pattern as in YBCO, which was demonstrated experimentally [5][6]. The contribution of excitons to superconductivity has been verified experimentally by irradiation with infrared light [5][6]. I present the analysis and calculation for the TBCCO system using the EEM-YBCO model, which predicts near-room-temperature enhancement for some of the TBCCO systems. I explain the positive consequences of the improvement in terms of the known phase instabilities of the system in the poor range, at the dirty limit, and in the dirty range before system collapse.

2. Theoretical analysis and calculation of TBCCO using EEM theory

Our previous work provided a detailed discussion of EEM theory [1][5][6], particularly the two-dimensional model [1][5][6] developed specifically for the cuprate systems. Since EEM has been theoretically and experimentally tested for the YBCO system [1][5][6], the aim of the current article is to use the validity of this theory for YBCO and apply it to the TBCCO system, including Hg-TBCCO and Pb-Sr-TBCCO. Behavioral implications for BCS will be discussed. We present our complete analysis based on the YBCO model and calculate the Tc improvement for the various TBCCO systems. In total, we considered 19 superconductors using their published optimal and poor values [6][11][12][13][14][15][16]. The model is transportable across HTS cuprates such as BiSCCO [5][6], and the Tl-Ba-Cu-O-2122, 2212, 2213, 2223, 1212, 1223, 1234, 1245 results presented here rest on the same cuprate geometry. The EEM theory is essentially based on the dipole interband interaction between a partially filled intrinsic valence p band of oxygen ions and the empty conduction s band of Ba/Sr ions, which lie above the CuO2 layer. Since excitons have excitation levels similar to phonons, we expect that if a large enough population of excited excitons is maintained, Tc can be increased [5][6]. The renormalized band structure then follows as stated in Refs. [1][5][6]. The key equations [1][5][6] are used in this study to perform the calculations and analyses of the TBCCO systems.
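Before the key equations are applied, it helps to recall where the 8/9 factor quoted in the following paragraphs comes from. A two-dimensional hydrogen-like exciton has the level spectrum below; this is a standard textbook sketch added here for orientation, not one of the key equations of Refs. [1][5][6]:

    \[ E_n = -\frac{R^{*}}{\left(n - \tfrac{1}{2}\right)^{2}}, \qquad n = 1, 2, \dots \]
    \[ E_0 \equiv |E_1| = 4R^{*}, \qquad \varepsilon_1 = |E_1| - |E_2| = 4R^{*}\left(1 - \tfrac{1}{9}\right) = \tfrac{8}{9}\, E_0 \]

Here R* is the effective exciton Rydberg; the first excitation energy is thus 8/9 of the ground-state binding energy E0, which is the photoexcitation energy used below.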
Excitons are stable if and only if their binding energy exceeds the thermal fluctuation energy that created them in the first place. If this condition is fulfilled, a global formation of localized excitons in the structure takes place, quite similar to the phonon lattice. Because excitons are bosons, when coupled to mobile holes they can replace phonons in BCS theory and lead to hole-hole Cooper pairing, thus producing superconductivity [5][6]. Consequently, to obtain the Tc based on exciton exchange, we first obtain the excitation energy of the exciton [5][6], given by the excited-level energy minus the binding energy of the exciton to the lattice. It is important to note that the excitonic binding to the lattice disappears along the cell diagonals; its spatial mean value over more than one revolution persists, however, and is quite stable in the superconducting phase. The lateral thermal fluctuation that destabilizes the bound exciton therefore occurs in the normal phase.

Because the excitonic levels lie within the band gap, such an exciton is localized, that is, confined to the crystal lattice. When the exciton delocalizes, it therefore decays into a free electron and a free hole. This picture of the normal phase gives rise to the two-carrier model [5][6]. Since the superconducting Tc is directly proportional to the BCS gap, it follows for cuprates that Tc is proportional to the exciton excitation energy; it is precisely this feature that allows EEM to explain the dependence of Tc on the hole density [5][6]. The photoexcitation needed to excite the excitons to the first excited level has energy [5][6] of ε1 = (8/9) E0 (the two-dimensional exciton result; see the sketch above). Since this photon energy is the same whether the system is excited from above or from below Tc, and since the gap and ε1 are of order 1 eV, the wavelength of the photon is much larger than the dimensions of the crystal unit cell. Therefore, when we pump the system with such photons [5][6], the electrons populating the highest energy level are excited; the location of these electrons differs depending on the physical phase of the system. Given the proportionality of Tc to the exchange-exciton excitation energy ε1, and assuming that J and N_F* remain relatively constant, we can estimate the enhancement from Tc to Tc* from the relation [5][6] between ε1 and Tc, starting from the original Tc value of each sample obtained from experimental sources, because the sample depends on the hole density n [5][6] through the excitonic excitation energy, with the non-optimal excitonic ground-state binding energy [5] given in terms of the effective Rydberg, the effective mass m*, and the density ratio n/n0. In our previous work on YBCO [1][5][6], this energy at optimal doping was 2.07 eV (= 2E0). To perform the TBCCO Tc-enhancement calculations at this point and compare them with our previous, experimentally confirmed study of infrared-light enhancement of the critical temperature Tc* in YBCO [5][6], we consider two separate cases, Optimal and Poor, which govern the subsequent TBCCO calculations exactly; we only use the Tc corresponding to the given superconductor. First, with the optimal YBCO sample [5][6] and the optimal TBCCO sample, n = n0 and Tc*(n0) = opt-Tc* for each HTS, and we obtain, regardless of system and phase, the optimal enhancement ratio Tc*/Tc. For YBCO with δ_OPT = 0.1 we obtained a maximum improvement of ~90% of Tc [5][6]; see Fig. 2, and Fig. 1 for the behavior of the non-optically-stimulated (NOS) Tc alone. The second case (see Fig. 1) covers the O2-deficient range (Table II), the dirty limit (DL) and the dirty range (Table III) (dirty HTS = maximum defects, BCS-like), with YBCO def-Tc = 70 K [5][6]; for the O2-deficient TBCCOs, see Fig. 1. It is then necessary to find the oxygen-deficiency factor n/n0 for each of these samples. It is first obtained from our equation using the opt-Tc and def-Tc values from the sources for each superconducting system [6][10][11][12][13][14][15][16], comparing Tc(n) to Tc(n0), that is, to opt-Tc, and solving to obtain all the corresponding values listed in Table 1 (see Fig. 1). Then, substituting into E0 [1][5][6], we obtain the enhancement ratio Tc*(n)/Tc(n0); using this equation [1][5][6] we obtain the results for the 19 superconductors. Figs. 1a and 1b show the behavior of the NOS Tc for underdoped and overdoped sample conditions, and they are mirror images; Boxes I-II and III are the same for both. Figure 2 compares the Tc enhancement ratios of NOS and OS (optically stimulated), and we clearly see the

linearization under OS conditions, following the same pattern observed for the YBCO and BISCCO [1][5][6] systems; the overdoped case is a mirror image. In fact, non-coherent infrared light (not a laser) [5][6] is capable of stimulating Tc enhancement even when the sample is under- or over-doped, in Box II and at the DL: the enhancement is about 20% within Box II, below 20% within the DL (dirty limit: underdoped or overdoped), and is lost in Box III. Since we have normalized by dividing by opt-Tc, Figs. 2a and 2b are universal for all cuprates.

Fig. 1a (mirror image) and Fig. 1b: NOS Tc behavior, in relative units, vs. the oxygen-deficiency ratio δ of the sample, with ranges OPT (δ = 0 to 0.1), MID (0.1 to 0.38), DEF (0.38 to 0.53) and DL (0.53 to 0.57); unstimulated limits: Box I (optimal) 138 K, 80 K to 137 K, 79.5 K; Box II (poor) 112 K, 70 K to 118 K, 95 K; Tc* enhancement ~90% in Box I, ~20% in Box II, lost beyond δ ≈ 0.57 (Box III).
Fig. 2a (mirror) and Fig. 2b: NOS & OS Tc enhancement ratios vs. def-Tc*/opt-Tc and def-Tc/opt-Tc, with regions I, II, DL and III marked.

The data are normalized: the optimal range is at δ = 0 to 0.1 (Table I), the underdoped and overdoped poor range at δ = ±(0.38 to 0.53) (Table II), the dirty-limit range at δ = ±(0.53 to 0.57) (DL), and the dirty range beyond; the curve is also valid for overdoping (negative δ), for the opt-Tc and def-Tc values of the bibliographical sources consulted. We can see very clearly in Fig. 1 the unenhanced [high, low] limits of Tc for each range. For Box I: 138 K, 80 K to 137 K, 79.5 K; Box II: 112 K, 70 K to 118 K, 95 K; 118 K, 95 K < DL < 109 K, 70 K; Box III > 109 K, 70 K. The def-Tc/opt-Tc curves in Figs. 2a and 2b show the poor/optimal ratio of Tc performance versus the oxygen-deficiency ratio of the sample; the corresponding ratio bands for Box I, Box II, the DL and Box III can be read from the figure. Notice the division at δ = 0.53, where the deficient range ends and the dirty limit begins. This is critical because it is where we observe the SI (superconductor-insulator) behavior that causes Tc to drop for both underdoped and overdoped samples of the metal-oxide type. Oxides are perceived in industry as inherently unstable in nature [12]. There are also manufacturing issues, such as intrinsic local stoichiometric defects arising from cation insertion into the wrong layer and oxygen sublattice defects, forcing manufacturers to adjust the oxygen content to a compound-specific stoichiometric ratio to optimize the superconducting properties [10]. This, as we shall see, is largely mitigated by using incoherent near-infrared light to achieve superconductivity in the SI range, rather than far-infrared stimulation of the Cu-O plane [17]. The manufacturing advantage of this method is obvious from the point of view of production and quality control.

OS behavior. No parameter tuning is necessary, since the cuprate geometry of the band structure provides the means to calculate the deficiency ratio and Tc*, given a Tc for that superconductor. The required incoherent-light input is calculated from equation (29) of [5][6]. Just as excitons can be excited, they can also be destroyed. To destroy the excitons, we irradiate them with near-infrared incoherent photons of energy at least equal to E; for the YBCO system, E is about 1 eV. Since the natural relaxation of the excited state is independent of the light intensity at a given wavelength, while Tc* depends on that intensity, the key is to maintain balance: the optically driven excitation rate and the density of excited excitons are related, and with enough IR photons absorbed uniformly over the entire surface of a film of this material, Tc* is achieved. Since the superconducting Tc is directly proportional to the BCS gap, it follows for the cuprate system that Tc is proportional to the excitation energy of the exciton [5][6]. Therefore, the def-Tc*/opt-Tc ratios versus the oxygen-deficiency ratio show a basically linear relationship; the ratio bands for Box I, Box II, the DL and Box III can be read from Figs. 2a and 2b. The difference between the NOS def-Tc/opt-Tc and OS def-Tc*/opt-Tc ratios is on the order of a factor of 2 for Box I, about 1.5 for Box II, dropping to about 1.25 at the dirty limit; for Box III the ratio difference is 0.9, where Tc* is below Tc. This indicates that the enhancement responds within the predicted limits and in proportion to the exciton energy.
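As a quick consistency check on the ~1 eV near-infrared photon energies just discussed, the photon wavelength follows from λ = hc/E. The following minimal Python snippet is a generic physics calculation, not code from this paper; it makes the scale separation explicit:

    # lambda = h*c / E: wavelength of a photon of given energy.
    H_C_EV_NM = 1239.84  # h*c in eV*nm

    def photon_wavelength_nm(energy_ev: float) -> float:
        return H_C_EV_NM / energy_ev

    print(photon_wavelength_nm(1.0))  # ~1240 nm, i.e. near-infrared
    # A cuprate unit cell is on the order of 1 nm, about three orders
    # of magnitude smaller than the exciting wavelength.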
The difference between the NOS and OS proportions for the different ranges makes clear that optical stimulation linearizes the response, with corresponding improvements of 20% for the underdoped and overdoped deficient range and 10% for the underdoped and overdoped DL, compared with 90% at optimal doping. This represents a significant extension of the phase plot of the experimental data sources seen in Fig. 3 (1 ± 0.89), substantially exceeding industry averages. We seek industrial deployment of these films given the obvious advantages in the SI range compared with the literature.

References
[1] K.W. Wong, W.Y. Ching. Physica C, 416 (2004) 47.
[2] I.N. Makarenko, D.V. Niforov, A.B. Kykov, O.K. Mel'nikov, S.M. Stishov. Anisotropy of electrical resistance of single crystals of HTc YBCO. Pis'ma Zh. Eksp. Teor. Fiz., No. 1, 52-56, 1988, USSR.
[3] J.G. Bednorz and K.A. Müller. Perovskite-type oxides: the new approach to

HTcSC. Rev. Mod. Phys., Vol. 60, No. 3, July 1988.
[4] Cava Lab: Intermetallic Superconductor research.
[5] Susana Curatolo, Kai Wai Wong. US Patent, IPC8 Class: AH02H700FI. Title: Method to operate a superconductor at a temperature above the superconducting temperature Tc.
[6] S. Curatolo, K.W. Wong. Progress in Superconductivity - EEM: The Exciton Enhancement Mechanism Theory and Experimental Evidence of Optically Enhanced Tc in High Tc Superconductors. Nova Science Publishers Inc., Oliver Chang, Editor. ISBN
[10] Hott, Roland; Narlikar, A.V. High Superconductivity 1 - Materials; Table 1, page 4. Springer, September 2004, Berlin. Forschungszentrum Karlsruhe, Institut für Festkörperphysik, P.O. Box 3640, Karlsruhe, Germany.
[11] Rao, C.N. High Temperature Ceramic Oxide Superconductors. Sadhana, Vol. 13, Parts 1 & 2, July 1988, Table 10, p. 29.
[12] S.A. Sunshine and T.A. Vanderah. Preparation of cuprate superconductors based on bismuth and thallium. Research Department, Chemistry Division, Naval Weapons Center, China Lake, CA 93555; Office of Naval Research, Technical Report No. 2. Published in Chemistry of Oxide Superconductors by T.A. Vanderah, Noyes Publications, Table 1, p. 5.
[13] DuPont Superconductivity: High Temperature Superconducting Josephson Junction Device Technology Development: Final Report January 1996 to January 1998. Program Manager Kirsten E. Myers; in situ deposition of thallium-containing oxides. K.E. Myers, Du Pont Experimental Station CR&D-E304/C110, Wilmington, DE, USA; p. 168, SPIE Vol. 2697, Sections 4.3 and 4.4.
[14] B.R. Xu, Y. Xin, G.F. Sun, K.W. Wong. XII International Conference on Thermoelectrics; Phys. Lett. A 192 (1994).
[15] F. Gouternoire et al. Substitution of Hg by Tl in Hg-Tl-BCCO-2223. Solid State Communications, 90 (1), April 1994.
[16] Hur, Yong Hi; Park, K.J.; Park, Jong C. (Daejeon). US Patent, 1995.
[17] Vaglio, Ruggero. The New Italian CNR Institute SPIN. CNR-SPIN / IEEE-CSC & ESAS European Superconductivity News Forum (ESNF), No. 12, April 2010.

Wet and dry abrasion behavior of borided AISI 8620 steel

I. Hilerio C. (1), M.A. Barron M. (2), R.T. Hernandez L. (3), A. Altamirano T. (4)
Metropolitan Autonomous University, Azcapotzalco Unit, Department of Materials. Av. San Pablo 180, Col. Reynosa-Tamaulipas, Mexico, D.F.
Email 1: ihc@correo.azc.uam.mx; Email 2: bmma@correo.azc.uam.mx; Email 3: hlrt@correo.azc.uam.mx; Email 4: aat@correo.azc.uam.mx

ABSTRACT

Wear tests were carried out on two different pieces of equipment designed by the UAM-A Tribology group in Mexico, under ASTM G-65 for dry conditions and ASTM G-105 for wet conditions. Because the standards use different parameters, it was necessary to match them for both tests: load, sliding distance, linear speed at the point of contact, granulometry, and wheel hardness. The steel studied here is widely used in machine components. In order to assess the benefit of the boriding process on AISI 8620 steel, the borided steel was compared with the base material. The wear rate is significantly lower in wet conditions than in dry conditions, because the water acts as a lubricant and coolant and masks the abrasion mechanisms. The surface of the borided sample is very hard and therefore shows little mass loss; it behaved well against abrasion in both tests.

Keywords: Abrasive wear, dry abrasion, wet abrasion, boriding.

INTRODUCTION

Abrasive wear is the result of three mechanisms: ploughing, cutting, and wedge formation. The phenomenon occurs when hard rough edges or abrasive particles slide against and interact with a surface [1-2]. Abrasive wear accounts for about 50% of all wear in industry; it involves the interaction of at least two bodies and, on many occasions, the presence of a fluid as well. For this reason, the tests were evaluated in both dry and wet conditions. ASTM G-65, titled Standard Test Method for Measuring Abrasion Using the Dry Sand/Rubber Wheel Apparatus, describes the configuration and establishes the operating parameters of the tribometer in the dry condition. The wet test is described by ASTM G-105, titled Standard Test Method for Conducting Wet Sand/Rubber Wheel Abrasion Tests. When the operating parameters are similar, a comparison between the two tribosystems is possible. AISI 8620 is a hardenable low-alloy nickel-chromium-molybdenum steel, widely used in machine components. It supports different surface treatments and forming processes, such as carburizing, quenching and tempering, forming, welding, and boriding [3]. Heat hardening is produced by heating the material, followed by rapid quenching in oil or water; with this operation the surface of the sample is hardened. Five methods are used to heat the steel: electrical induction, resistance, flame, laser, and electron beam [4,5]. Boriding is a thermochemical treatment in which boron is diffused into a metal surface; the process can also be applied to non-ferrous metals such as nickel and cobalt alloys and refractory metals. It can be carried out in a solid pack, liquid, or gaseous medium, at a temperature between 900 and 1100 °C. Borided steels exhibit extremely high hardnesses, commonly from 1500 to 2300 HV, in a thin layer between 50 and 150 µm. The morphology of the boride layer formed on iron is of two types: an upper FeB layer and a lower Fe2B layer; the second layer is mainly distributed at the grain boundary.
Both layers present a similar orientation and have columnar structural features growing toward the surface [4,6].

EXPERIMENTAL PROCESS

Preparation of the material. Three groups of specimens were prepared in AISI 8620 steel: base, heat-hardened, and borided. The first group received no treatment; the specimens were cut from a rectangular bar, 25.4 mm wide by 57.2 mm long by 12.5 mm thick. The dimensions are the same for all specimens. Heat hardening was carried out by induction at 900 °C for 30 minutes under a nitrogen flow of 10 ft3/h, followed by an oil quench. After this, a tempering treatment was carried out at 200 °C for 2 hours.

The boriding process was carried out in a salt bath: the specimens were placed in a pot and covered with boron salt, and this pack was heated to 950 °C for four hours. The hardness after this process is 850 HV, measured with a load of 100 gf applied for 10 s.

Abrasive wear tests. The tribometers were developed by the UAM-A group in accordance with the standards. Figure 1 shows both types of machines in schematic form. This equipment was used to study abrasive wear behavior in dry and wet conditions. The following parameters were used for both tests. The abrasive was graded quartz sand (grain size specified in µm by the standards). The load between wheel and test piece was 200 N, the sliding distance 5586 m, and the rotation speed 250 rpm. The dry test required an abrasive flow of 0.3 to 0.4 kg/min; the wet test used a 1.5 kg charge of abrasive mixed with water. Before starting the tests, the specimens were cleaned and weighed on an analytical balance. Mass loss was determined by weighing at five points during each test.

Fig. 1: Schematic of the tribometers: a) wet conditions; b) dry conditions.

RESULTS

The structure of AISI 8620 after hardening is shown in Figure 2, from a metallographic preparation of the cross section etched with nital (2%) for 6 seconds. The sample presented a hardness of 400 HV, measured with a load of 100 gf for 10 s.

Fig. 2: Micrograph of quenched and tempered 8620 steel.

The boride layer formed on the iron has two layers, an upper FeB layer and a lower Fe2B layer; the second is mainly distributed at the grain boundary. Both present a similar orientation and columnar structural features growing toward the surface. A cross-sectional metallography, etched with a chemical reagent (nital 2%) for 6 seconds, is shown in Figure 3.

Fig. 3: Micrograph of the borided steel.

The evolution of the hardness profile obtained with this process is shown in Figure 4. Compared with the current literature this material is rather soft, because the sample received a heat treatment after boriding.
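The mass-loss readings described above are conventionally converted to volume loss so that materials of different density can be compared; ASTM G-65 reports volume loss as mass loss divided by density. A minimal Python sketch follows, with illustrative numbers only (the paper does not report these values):

    # ASTM G-65 style volume loss: mass loss (g) / density (g/cm^3) * 1000 -> mm^3
    def volume_loss_mm3(mass_before_g: float, mass_after_g: float,
                        density_g_cm3: float) -> float:
        return (mass_before_g - mass_after_g) / density_g_cm3 * 1000.0

    # Illustrative values only; 7.85 g/cm^3 is a typical density for low-alloy steel.
    print(volume_loss_mm3(125.4032, 125.3571, 7.85))  # ~5.9 mm^3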

Fig. 4: Hardness profile of the borided steel.

Figure 5 shows the wear scars obtained on the test pieces for all three steels: base (a), heat-treated (b), and borided (c).

Figure 5: Abrasive wear scars for both conditions.

The abrasion behavior of borided AISI 8620 steel shows higher resistance to abrasion in both conditions, as shown in Figure 6; a benefit from the heat treatment is also noticeable. Borided 8620 steel has excellent properties under both test conditions, but performs better in wet conditions.

Fig. 6: Comparison of behavior for both wear test conditions.

In wet conditions the effect is less severe than in dry conditions. This is explained by the protective film of water formed in the aqueous tribosystem: it makes it harder for the silica sand abrasive particles to penetrate and, at the same time, lets the sample slide more easily over the metal surface. For this reason, the wear scars are less severe.

CONCLUSIONS

Based on the results obtained, it is possible to point out that the wear behavior of a metallic material is modified by a change in its surface structure; this relationship has been clearly shown in this work. The study has shown the behavior of the steel obtained with different surface treatments, depending on the resulting structure. The wear resistance of AISI 8620 steel ranks in this order: 1) base AISI 8620 steel as the least resistant; 2) AISI 8620 hardened by quenching and tempering; 3) borided AISI 8620 as the most resistant. Borided steel is therefore a useful option for working under these wear conditions.

REFERENCES
1. K. Kato. Wear Mechanisms. Guest and plenary paper, American Society of Mechanical Engineers journals, 1997.
2. J.-L. Bucaille and E. Felder. The scratch test in polymers and metals: modeling and experimental approaches. Materials and Techniques, No. 3-4, 2001.
3. T.G. Mathia, M. Ouadou, F. Saucez. State of surfaces and sclerometry. Engineers and Industry, No. 25, 45-48, 1991.
4. E. Bergmann, H. Matthieu, R. Gras. Surface analysis and technologies, Vol. 4. Presses Polytechniques et Universitaires Romandes / Ed. Eyrolles, 2003.
5. Y. Ono, C.-Q. Zheng, F. Hamel, R. Charron, C.-A. Long. Experimental Investigations on Monitoring and Control of Induction Heating Process for Semisolid Alloys Using Heating Coil as Sensor. Measurement Science and Technology, 13, 2002.
6. B. Bhushan, B.K. Gupta. Handbook of Tribology: Materials, Coatings, and Surface Treatments. McGraw-Hill, 1991.

Study on the Influence of Cutting Parameters on Cutting Forces and Chip Shape of Austempered Ductile Iron (ADI)

Chihong Wang, Xuhong Guo, Wei Wang, Qu Dong
School of Mechanical & Electrical Engineering, Suzhou University, Suzhou, China

ABSTRACT

A dry cutting experiment was performed on Austempered Ductile Iron (ADI) with CC650 ceramic cutting tools. A triaxial piezoelectric dynamometer coupled to a multichannel charge amplifier was used to measure and acquire the cutting forces. The effects of the cutting parameters on the main cutting force were analyzed by means of an orthogonal test and the fuzzy logic tool in the Matlab toolboxes, including the optimization of the cutting parameters and the establishment of empirical formulas for the cutting forces. The chip shapes were further explored as the cutting parameters varied. The results showed that the depth of cut had the main influence on the cutting force, followed by the feed rate and then the cutting speed. The cutting force fluctuated with the formation and fracture of the built-up edge (BUE) as the cutting speed increased: it decreased as the BUE grew and increased when it disappeared, finally remaining almost constant. The cutting force increased linearly with increasing depth of cut and feed rate. The chip shape was mainly determined by the cutting speed and the feed rate, going from crack-shaped and C-shaped chips to rolled chips with increasing cutting speed, while varying from rolled chips to C-shaped and rewound chips as the feed rate increased.

Keywords: Austempered Ductile Iron (ADI), cutting parameters, cutting force, chip shape

1. INTRODUCTION

Austempered Ductile Iron (ADI) derives excellent overall mechanical properties from its unique microstructure of austenite and high-carbon ferrite, produced by the austempering heat treatment. ADI has become one of the most closely watched materials and technologies of the 21st century, and many studies have investigated its machining. Panasiewicz [1] found that a machine spindle made of ADI shows worse machinability than one made of common iron. Chang [2], from the Center for Advanced Technology in Michigan, USA, further showed that the poor machinability is attributable to the austenite retained in the microstructure, which transforms into martensite, degrading the properties, when cutting force is exerted on ADI. This point was also endorsed by Gundlach, Pashby, Berry, and Seah. Chen Ping and Keishima [3,4] from Japan studied high-speed machining of ADI, analyzed the materials and cutting characteristics of a series of cutting tools, and found that cemented carbide tools were not suitable for cutting ADI, while ceramic and CBN tools were, and that the CBN tool was not sensitive to cutting speed. K. Katuku [5] investigated wear, cutting forces and chip characteristics when dry turning ASTM Grade 2 austempered ductile iron with PCBN cutting tools under finishing conditions (depth of cut: 0.2 mm; feed rate: 0.05 mm/r; cutting speed: 50~80 m/min), and pointed out that flank wear and crater wear were the main wear modes within that cutting speed range, while abrasion and thermally activated wear were the main wear mechanisms.
Hongtao Zhang [6] carried out a cutting-force experiment on ADI with four kinds of polycrystalline cubic boron nitride (PCBN) compact tools, obtaining the cutting force, friction force and friction coefficient curves as the cutting speed was varied (depth of cut: 0.3 mm; feed rate: 0.16 mm/r; cutting speed: 57, 89, 141 m/min). They found that the friction force and friction coefficient of the low-CBN-content compact were lower than those of the high-content compact, and those of the ceramic-bond compact lower than those of the metal-bond compact. PCBN tools were not suitable for cutting ADI when the binders contained massive Al, as the friction force and friction coefficient increased continuously with cutting acceleration. This paper focuses on experimental studies of the cutting forces and chip shape of ADI machined with a ceramic tool under finishing conditions: it establishes empirical formulas relating the cutting parameters to the cutting forces, optimizes the cutting parameters, discusses the influence of the cutting parameters on the cutting forces, and examines the chip shape as the cutting parameters change.

2. CUTTING EXPERIMENT

Preparation and properties of the experimental material. All the experimental material was made at the CSR Qishuyan Research Institute of Locomotive and Rolling Stock Technology. The ductile iron used in this experiment was cast as a cylindrical barrel of Φ600 mm × 400 mm. The chemical composition (mass percent) of the ADI before the austempering treatment is shown in Table 1. The heat treatment program involved austenitization at 890 °C for 120 min followed by quenching in a NaNO3 salt bath at 350 °C for 60 min. The microstructure, consisting of stringer-like retained austenite and ferrite needles, can be seen with the aid of an electron microscope (EM). The mechanical behavior of the ADI material is listed in Table 2.

Table 1: Chemical composition of the ductile iron [wt%]
C: 3.6 | Si: 2.85 | Mn: 0.3 | P: 0.1 | S: 0.03 | Mg: 0.02 | Re: 0.02 | Al: 0.05 | Ti: | Cu: 0.6 | Mo: 0.3 | Cr: 0.1

Table 2: ADI mechanical properties
Tensile strength (MPa): 1395 | Yield strength (MPa): 900 | Elongation (%): 2.75 | Impact toughness (J/cm2): 35 | Hardness (HRC): 42

The ductile iron was compared before and after the austempering heat treatment by metallographic testing. Fig. 1 and Fig. 2 show that the morphology of the graphite hardly differed, while the microstructure changed from the as-cast mixture of ferrite, pearlite and austenite to ferrite needles and stringer-like retained austenite.

Fig. 1: Metallographic images (100x) of the ductile iron: (a) morphology of the cast-iron graphite; (b) microstructure of the cast iron.
Fig. 2: Metallographic images (100x) of the austempered ductile iron: (a) morphology of the ADI graphite; (b) microstructure of the ADI.

Experimental method. The cutting experiment was performed on a CA6140 lathe. The cutting tool adopted was CC650, an alumina-based (Al2O3) mixed ceramic with titanium carbide (TiC). According to the ISO standard, the insert was designated SNGA, square (manufacturer: Sandvik). The geometry of the cutting tool was as follows: rake angle γ0 = -6°; flank angle α0 = 6°; inclination angle λs = -4°; edge angle κr = 75°; minor edge angle κr' = 15°; corner radius rε = 0.8 mm; first-face (chamfer) ground width 0.1 mm, with chamfer rake angle γ01 = -26°. To measure and acquire the cutting forces, a data acquisition system (Fig. 3) was used, consisting of a Kistler 9257B dynamometer, a charge amplifier, a Kistler 9403 tool holder, and a PC. In this experiment a multi-factor method was adopted, with the cutting parameters selected as follows: factor A, cutting speed vc = 81.6, 163.4, 261.4 m/min; factor B, feed rate f = 0.08, 0.12, 0.16 mm/r; factor C, depth of cut ap = 0.05, 0.1, 0.15 mm. The cutting forces were measured repeatedly and averaged in order to reduce the experimental error. The analysis was carried out with the orthogonal test and the fuzzy logic tool in the Matlab toolboxes.

Fig. 3: Cutting-force data acquisition system.

3. RESULTS AND DISCUSSION

Empirical formulas for the cutting forces. The cutting parameters also had a cooperative effect on the cutting forces in addition to acting separately. An interacting orthogonal table was used to process the measured data; comparison of the F values with the critical value revealed the high significance of cutting speed, feed rate and depth of cut, and the significance of the interactions A×B and B×C. The depth of cut had the main influence on the cutting force, followed by the feed rate and, in turn, the cutting speed. Taking into account the K values under these experimental conditions, productive efficiency, and tool life, the following cutting parameters are recommended when cutting ADI under finishing conditions: cutting speed, 163.4 m/min; feed rate, 0.16 mm/r; depth of cut, 0.15 mm. Two types of empirical formulas can be chosen to calculate the cutting forces in practice: (1) the exponential formula; (2) the unit cutting force. The empirical formulas are shown in Table 3, involving three force components: the principal cutting force Fc (Fz), the radial thrust force Fp (Fy), and the axial thrust force Ff (Fx).
Table 3: Empirical cutting-force formulas for ADI (exponential form; the numerical constants and exponents were fitted from the measured data)
Principal cutting force: Fc = C1 · ap^x1 · f^y1 · vc^z1
Radial thrust force: Fp = C2 · ap^x2 · f^y2 · vc^z2
Axial thrust force: Ff = C3 · ap^x3 · f^y3 · vc^z3

Table 3 shows that the cutting forces increase with the growth of the depth of cut and the feed rate, as their exponents are positive, and decrease with increasing cutting speed, as its exponent is negative. The depth of cut has a greater influence than the feed rate, since it carries the larger exponent, in agreement with the results of the orthogonal test.
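Because the exponential formula in Table 3 is linear in logarithms, its constant and exponents can be recovered from measured forces by ordinary least squares. The following Python sketch illustrates the procedure; the data points are placeholders, not the paper's measurements:

    # Fit F_c = C * ap^x * f^y * vc^z, i.e.
    # ln F = ln C + x*ln ap + y*ln f + z*ln vc
    import numpy as np

    def fit_cutting_force(ap, f, vc, Fc):
        X = np.column_stack([np.ones_like(ap), np.log(ap), np.log(f), np.log(vc)])
        coef, *_ = np.linalg.lstsq(X, np.log(Fc), rcond=None)
        lnC, x, y, z = coef
        return np.exp(lnC), x, y, z

    # Placeholder measurements (ap in mm, f in mm/r, vc in m/min, Fc in N):
    ap = np.array([0.05, 0.10, 0.15, 0.10])
    f  = np.array([0.08, 0.12, 0.16, 0.16])
    vc = np.array([81.6, 163.4, 261.4, 163.4])
    Fc = np.array([60.0, 110.0, 160.0, 130.0])
    C, x, y, z = fit_cutting_force(ap, f, vc, Fc)
    print(C, x, y, z)  # fitted constant and exponents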

Comparing with the results of [7], where hardened and quenched steel was cut with a cemented carbide tool, hardened steel and ADI behave similarly: the cutting forces increase as the depth of cut and feed rate grow (positive exponents) and decrease with increasing cutting speed (negative exponent).

Influence of cutting speed on the cutting forces. A single-factor experiment was set up to further explore the influence of cutting speed on the cutting forces: feed rate f = 0.12 mm/r; depth of cut ap = 0.2 mm; cutting speed vc = 40, 45, 50, 70, 90, 120, 150, 180, 200, 250, 280 m/min. Figure 4 shows the curve of the cutting force Fc against the cutting speed vc, with corresponding images of the chip shape.

Fig. 4: Cutting force Fc vs. cutting speed vc (f = 0.12 mm/r, ap = 0.2 mm), with chip images.

Fig. 4 indicates that the main cutting force decreased with increasing cutting speed up to about 50 m/min. The built-up edge (BUE) formed at the cutting edge gives the tool a larger effective rake angle and less chip deformation, and the deformation coefficient decreases as the cutting speed grows. The BUE peaked at a speed of 50 m/min, with a minimum cutting force of about 139 N, and the chip became C-shaped. Between 50 and 90 m/min the force increased to some extent, for two reasons: (1) the rake angle returned to its nominal value as the BUE fractured with increasing cutting speed, so chip deformation increased; (2) the interaction between strain hardening and thermal softening resulted in increased hardness. The main cutting force decreased at a low rate when the cutting speed exceeded 90 m/min, mainly because of the heat generated in metal cutting: the friction coefficient between the cutting tool and the workpiece decreased, with a larger shear angle Φ and a lower deformation coefficient Λh. The chip shape gradually changed from a highly rolled chip to a C-shaped chip. This agrees with [8], where No. 45 steel was milled with an LT55 ceramic tool.

Influence of the depth of cut and the feed rate on the cutting force. To examine the influence of the depth of cut and the feed rate on the cutting force, an experiment was carried out with: cutting speed vc = 163.4 m/min; depth of cut ap = 0.05, 0.1, 0.2 mm; feed rate f = 0.08, 0.12, 0.16 mm/r. Figure 5 shows the relationship between the depth of cut, the feed rate and the cutting force Fc. Figures 5(a) and (b) show that the cutting force increased almost linearly with the growth of the depth of cut and the feed rate, which determine the chip width and thickness respectively. The workpiece (austempered at 350 °C for 1 h), consisting of stringer-like retained austenite and ferrite needles, had both thermodynamic and dynamic stability: no martensite was produced by cutting the ADI, in agreement with [9]. Figure 5(c) shows the three-dimensional surface of the relationship between feed rate, depth of cut and cutting force modeled by fuzzy logic. The cutting force increased quickly with increasing depth of cut, and more slowly with increasing feed rate.

Fig. 5: (a) depth of cut ap vs. cutting force Fc; (b) feed rate f vs. cutting force Fc.

Fig. 5(c): Three-dimensional graph describing the relationship between feed rate, depth of cut and cutting force (vc = 163.4 m/min).
Fig. 5: Influence of depth of cut and feed rate on the cutting force.

Fig. 6 shows the chip shapes obtained when cutting ADI with the labeled cutting parameters. The chip shape was mainly determined by the feed rate and the cutting speed (Fig. 4): it changed from highly rolled chips to C-shaped and rewound chips as the cutting feed increased, while it varied from crack-shaped and C-shaped chips to spiral chips as the cutting speed increased. The depth of cut had little effect on the chip.

Fig. 6: Chip shape when cutting ADI (vc = 163.4 m/min).

4. CONCLUSIONS

(1) When cutting ADI with the CC650 ceramic tool under finishing conditions, the depth of cut has the main influence on the cutting force, followed by the feed rate and then the cutting speed, with recommended cutting parameters: cutting speed, 163.4 m/min; feed rate, 0.16 mm/r; depth of cut, 0.15 mm.
(2) The cutting force fluctuates with the built-up edge (BUE) as the cutting speed increases: it decreases as the BUE gradually forms and increases when it fractures, eventually remaining nearly constant. The cutting force increases almost linearly with increasing depth of cut and feed rate.
(3) The chip shape is mainly determined by the feed rate and the cutting speed: it goes from highly rolled chips to C-shaped and rewound chips with increasing cutting feed, while varying from crack-shaped and C-shaped chips to rolled chips as the cutting speed increases.

5. ACKNOWLEDGMENTS

The authors thank the Suzhou Office of Science and Technology for supporting this research.

6. REFERENCES
[1] Pashby I.R., Wallbank J. Ceramic tool wear when machining austempered ductile iron. Wear, No. 1, April 13, 1993.
[2] Goldberg M., Smith G.T., Berry J.T., Littlefair G. Evaluation of machinability and surface integrity characteristics of austempered ductile iron (ADI) using ultrahard cutting tools. 3rd International Conference on Machining and Grinding, Cincinnati, Ohio, USA, Oct 4-7, 1999.
[3] Cakir M. Cemal, Bayram Ali, Isik Yahya, Salar Baris. The effects of austempering temperature and time on the machinability of austempered ductile iron. Materials Science and Engineering, Vol. 407, No. 1-2, October 25, 2005.
[4] Seker Ulvi, Hasirci Hasan. Evaluation of the machinability of austempered ductile irons in terms of cutting forces and surface quality. Journal of Materials Processing Technology, Vol. 173, No. 3, April 20, 2006.
[5] K. Katuku, A. Koursaris, I. Sigalas. Wear, cutting forces and chip characteristics when dry turning ASTM Grade 2 austempered ductile iron with PCBN cutting tools under finishing conditions. Journal of Materials Processing Technology, Vol. 209, No. 5, March 1, 2009.
[6] Zhang Hong-tao, Li Hai-bo, Dong Hai, Li Man. Cutting properties of austempered ductile iron with PCBN compact tools (in Chinese). Materials for Mechanical Engineering, Vol. 32, No. 8, Aug 2008.
[7] Xu Jin, Ye Bang-yan. Study on high-speed cutting of hardened steel with coated insert (CN35) (in Chinese) [D]. South China University of Technology, Guangzhou, October.
[8] Ai Xing. High Speed Machining Technology (in Chinese) [M]. National Defense Industry Press, Beijing.
[9] Deng Hongyun, Wang Chunjing, Zhang Zhou. Examples of Austempered Ductile Iron Production and Application (in Chinese). Chemical Industry Press, Beijing.

Analysis and Research of the Influence of Ignition Advance Angle on Engine Emissions as a Function of Fuel Quality

Li Jun
School of Mechatronics and Automotive Engineering, Chongqing Jiaotong University, Chongqing, China
and Zhang Shiyi
School of Marine, Chongqing Jiaotong University, Chongqing, China
and Yang Lizhong
China Marine Bunker (PetroChina) Co. Ltd, QinHuangDao Branch, Hebei, China

ABSTRACT

Because fuel components and quality differ across China, this paper discusses and analyzes fuel components and quality in China together with experimental results. Using ordinary fuel from the HuaZhong and XiNan districts, the CO2, HC and NOx emissions of an engine were tested, and the change in emissions was studied as the ignition advance angle was adjusted. The engine can run smoothly, and the HC and NOx emissions can be reduced, when the ignition advance angle is appropriately reduced for finished fuel whose aromatic-hydrocarbon content is comparatively high, as in the HuaZhong district.

Keywords: fuel quality; vehicle engine; engine emission; ignition advance angle

1. INTRODUCTION

Vehicle emissions are gradually becoming one of the main air pollutants in cities. The components and quantity of engine emissions are directly influenced by the quality of the fuel, for better or worse. Because of differences in crude-oil components at Chinese refineries, the finished fuel differs from district to district. This paper investigates vehicle engine emissions based on the different fuel qualities in China, and discusses how to control and reduce vehicle emissions by adjusting some engine parameters to match the fuel quality and components [1-3,5]. According to published research, the production of China Petrochemical Corporation and China National Petroleum Corporation accounts for approximately 90% of all finished fuel in China [2,3]. Because of differences in refining technology and purification processes, fuel quality and components differ among the districts of China. From an analysis of gasoline quality at refineries and gas stations in sixteen cities across 7 districts of China, we know that the RON (research octane number) basically meets the demand, but the MON (motor octane number) of about 20% of 93# gasoline is low. The antiknock index of the fuel basically satisfies the Chinese standard. The olefin content of finished fuel in China is considerably higher than in America and Japan, commonly by about 30%~50%, while the sulfur content is lower.

The aromatic-hydrocarbon and benzene-hydrocarbon contents are also lower than in imported fuel. It is therefore important to investigate and control engine emissions for the different fuel components and qualities, and how to adjust engine performance and structural parameters and control the combustion process to reduce engine emissions has become a key problem.

2. ANALYSIS AND MATHEMATICAL MODELS

According to the characteristics of the engine combustion process, the consumed fraction x of fuel burned as a function of time is [4]:

    x = 1 - e^{-f(t)}    (1)

    f(t) = \int_0^t n \rho \, dt    (2)

    \rho = k_0 t^m    (3)

where n is the ratio between the number of molecules of effectively reacting substance and the number of molecules of reacting substance, ρ is the relative density, m is the fuel (combustion) quality index, and k0 is a proportionality coefficient. Substituting equation (3) into (2) and integrating:

    f(t) = \int_0^t n k_0 t^m \, dt = \frac{k}{m+1} t^{m+1}    (4)

where k = n k0. Substituting equation (4) into (1):

    x = 1 - \exp\left(-\frac{k}{m+1} t^{m+1}\right)    (5)

A common fixed value of m is used for gasoline engines [4]. We know from experience that fuel does not burn completely; in general, if more than 99% of the fuel is burned, combustion can be considered complete. The burned fraction x as a function of time is then:

    x = 1 - \exp\left[-6.908 \left(\frac{t}{t_z}\right)^{m+1}\right]    (6)

where m is the fuel quality index, t_z is the assumed duration of the combustion process, t is the burning time, and t/t_z is the relative burning time (the constant 6.908 = ln 1000 corresponds to 99.9% of the fuel burned at t = t_z). The relative burning time t/t_z can be replaced by the corresponding relative crank-angle ratio: φ is the crank angle during combustion, φ0 is the ignition advance angle, and φ_z is the crank angle swept during the whole combustion process. Then:

    x = 1 - \exp\left[-6.908 \left(\frac{\varphi - \varphi_0}{\varphi_z}\right)^{m+1}\right]    (7)

The quasi-empirical equation for the combustion rate dx/dφ of the gasoline engine is:

    \frac{dx}{d\varphi} = \frac{6.908\,(m+1)}{\varphi_z} \left(\frac{\varphi - \varphi_0}{\varphi_z}\right)^{m} \exp\left[-6.908 \left(\frac{\varphi - \varphi_0}{\varphi_z}\right)^{m+1}\right]    (8)

Therefore, the combustion rate of the gasoline engine is a function of the fuel quality, the ignition advance angle, and the combustion duration (or the relative crank angle of the combustion duration). From the above equations we can see that the combustion rate directly shapes the combustion process; the fuel quality affects the combustion rate, the combustion process in turn influences the engine emissions and the quantity of pollutants [4,5], and therefore the fuel quality influences the engine emissions. Consequently, if we appropriately adjust some running parameters of the gasoline engine, the combustion rate can be changed, and the engine emissions can thereby be reduced and controlled [6-12].
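Equations (7) and (8), as reconstructed above, are Wiebe-type burn-fraction and burn-rate functions and are straightforward to implement. The following Python sketch uses illustrative parameter values only, since this excerpt does not specify m or φz:

    # Wiebe burn fraction x(phi) and burn rate dx/dphi, Eqs. (7)-(8).
    # Valid for phi >= phi0; angles in crank degrees.
    import math

    A = 6.908  # = ln(1000): 99.9 % of the fuel burned at the end of combustion

    def burn_fraction(phi, phi0, phi_z, m):
        """x = 1 - exp(-A * ((phi - phi0)/phi_z)**(m+1))."""
        r = (phi - phi0) / phi_z
        return 1.0 - math.exp(-A * r ** (m + 1))

    def burn_rate(phi, phi0, phi_z, m):
        """dx/dphi = A*(m+1)/phi_z * r**m * exp(-A * r**(m+1))."""
        r = (phi - phi0) / phi_z
        return A * (m + 1) / phi_z * r ** m * math.exp(-A * r ** (m + 1))

    # Illustrative values: ignition at 15 deg BTDC, 50 deg burn duration, m = 2.
    print(burn_fraction(10.0, -15.0, 50.0, 2))  # ~0.58 burned at 10 deg ATDC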

3. EXPERIMENT

In order to investigate the influence of fuel quality on engine emissions, we completed engine combustion and emission experiments using fuel from the XiNan and HuaZhong districts of China. The influence of fuel quality on gasoline-engine emissions was analyzed and discussed while changing and adjusting the ignition advance angle appropriately. We used a JL465Q series engine, with a displacement of 1.0 L and a compression ratio of 9.5:1. The exhaust analyzer was a MEXA-324J, the exhaust test instrument a BOSCH ETT855, and the air-fuel-ratio instrument a MEXA-110. The fuel was 93# gasoline from the XiNan and HuaZhong districts. In the experiment, the rotational speed of the engine was 850 to 3500 r/min (rated speed 2200 r/min). The engine intake temperature was held constant, the cooling-water temperature was controlled at 85 °C, and the lubricating-oil temperature at 90 °C. The engine load was around 85% to 100%. The experiment was completed under two sets of parameter conditions, reflecting the different amounts of sulfur, olefins, aromatic hydrocarbons and benzene hydrocarbons in the gasoline of the two districts. We investigated the variation of engine emissions for the different fuel qualities by appropriately adjusting some engine parameters, and we analyze and discuss the changes in the exhaust values and methods to improve the emissions.

4. RESULTS AND ANALYSIS

In order to investigate the influence of the ignition advance angle on engine emissions as a function of fuel quality, we adjusted the ignition advance angle from φ0 = 10° to φ0 = 15° in these experiments. The changes in exhaust emissions are shown in Figs. 1-3. The aromatic-hydrocarbon content of the fuel from the HuaZhong district is higher than that from XiNan (it is about the highest in China), although its olefin content is close to XiNan's. From Fig. 1 we can see that the CO2 emission when burning the HuaZhong fuel is clearly higher than with the XiNan fuel, because the aromatics in the HuaZhong fuel contain some benzene hydrocarbons and the aromatic content is higher than in the XiNan district. When the ignition advance angle is increased, ignition occurs earlier in the combustion process and the CO2 emission can be partly reduced; this is very visible when running the engine at the tested speeds, where the CO2 decreases until its emission value approaches that of the XiNan fuel, which has the lowest aromatic content. Therefore, the CO2 emission can be reduced by appropriately increasing the ignition advance angle for fuels containing more aromatic hydrocarbons. As shown in Figs. 2-3, HC increases somewhat and NOx decreases somewhat when the ignition advance angle is increased, with only a small discrepancy between the two districts. However, Pmax and the rate of pressure rise increase markedly in the rapid combustion stage when the ignition advance angle is increased, and the engine runs rough rather than smoothly. Due to the higher octane number of aromatic hydrocarbons, the antiknock quality of the fuel is greater; yet when the fuel contains more aromatic hydrocarbons, the gasoline engine tends to run rough, with vibration and harsh running noise.
For fuel with a larger aromatic-hydrocarbon content, when the ignition advance angle φ0 is increased beyond a certain value, the NOx emission of the engine is markedly reduced and then changes only slowly, but the engine runs noticeably badly and not smoothly, as shown in Fig. 4; engine operation is greatly affected. Therefore, to decrease the CO2 and NOx emissions we should increase the ignition advance angle appropriately, but not beyond a certain value; otherwise the decrease in CO2 and NOx emissions is insignificant, the HC emissions increase remarkably, the engine runs badly and roughly, and the vibration and noise of the running engine become severe.

5. CONCLUSIONS

Because fuel components and quality differ across China, engine emissions also differ. For fuel with a higher aromatic-hydrocarbon content, the CO2 and NOx emissions can be reduced when the ignition advance angle is increased appropriately. But when the angle φ0 exceeds a certain value, the reduction of CO2 and NOx emissions is no longer marked.

The HC emission, meanwhile, increases observably, the engine runs poorly and not smoothly, and the vibration and noise of the running engine gradually become more severe. We therefore adjust the ignition advance angle φ0 appropriately, and not beyond the allowed value.

Fig. 1: Relationship of ignition advance angle to CO2 emission (HuaZhong and XiNan, before and after adjusting).
Fig. 2: Relationship of ignition advance angle to HC emission (HuaZhong and XiNan, before and after adjusting).
Fig. 3: Relationship of ignition advance angle to NOx emission (HuaZhong and XiNan, before and after adjusting).
Fig. 4: Relationship of ignition advance angle φ0 to NOx emission using the fuel from the HuaZhong district.

6. ACKNOWLEDGMENT

This work was supported by the Education Natural Science Foundation Project of Chongqing EDUC (KJ090408), China; the Foundation Project of the Key Laboratory of Chongqing Communication Engineering (2008CQJY002); and the Education and Teaching Reform Project of Chongqing CSTC, China.

7. REFERENCES
[1] Wang Jieqing. Analysis of the Quality and Upgrading of Domestic Clean Gasolines. Proceedings of the Tenth Conference of the Committee on Fuel and Lubricating Oil of the Society of Automotive Engineers of China [M].
[2] Zhang Yongguang. Countermeasures for fuel components in China to meet environmental protection requirements. Proceedings of the

Clean Fuel and Environmental Protection Seminar [M]. Beijing: 1999.
[3] Li Jian. Research on engine emission performance based on fuel character. Master's dissertation, Chongqing University [D], 2005.
[4] Zhang Zhipei. Principles of the Automotive Engine [M]. China Communication Press, Beijing, 2003.
[5] Li Jun, Zhang Shiyi. Analysis and Research on the Influence of the Excess Air Coefficient on Engine Emission Based on Fuel Quality [J]. Automotive Engineering, (3).
[6] Li Jian, Qin Datong, Han Weijian. Research on the effects of fuel properties on vehicle emissions [J]. Journal of Chongqing University (Natural Science Edition), 2005, 28(7): 4-8.
[7] Zhang Lijun, Yao Chuanfeng. Investigation of the Influence of Fuel Quality on the Emissions of Particles and Polycyclic Aromatic Hydrocarbons from Diesel Engines [J]. Chinese Internal Combustion Engine Engineering, 2000(4): 35-38.
[8] Guo Hejun, Fang Maodong. Research on the effects of gasoline qualities on vehicle exhaust emissions [J]. Journal of WuHan University of Technology, 2005, 27(9): 64-66.
[9] Lin Huaiwei, Chen Xiaolong. The influence of gasoline quality on automobile emissions [J]. World Automobile, 2000(4): 18-20.
[10] Cheng Yong, Fu Tieqiang. Status Quo and Analysis of Evaporative Emissions from Light-Duty Domestic Gasoline Vehicles. 2002, 24(3): 182-186.
[11] Kihyung Lee, Chang Hee Lee. An experimental study of the extent of the operating region and emission characteristics of stratified combustion using the controlled auto-ignition method. Energy and Fuels, 2006(20).
[12] C.H. Lee, K.H. Lee. An experimental study on the combustion and emission characteristics of a stratified charge compression ignition (SCCI) engine. Energy and Fuels, 2007(21).

SHM BASED ON SCANNING SINE WAVE FOR SHORT COMPOSITE TUBES Ming LI, Gurjiwan SINGH, Gurjashan SINGH, Alberto GARCIA, Ibrahim TANSEL Mechanical and Materials Engineering, Florida International University, Miami, FL 33174, USA Mustafa DEMETGUL College of Technical Education, Marmara University, Istanbul, Turkey and Aylin YENILMEZ Mechanical Engineering, Istanbul Technical University, Istanbul, Turkey ABSTRACT Composite tubes have been widely used to build the structures of unmanned aerial vehicles (UAVs). Monitoring the integrity of these structures would improve the safety and reliability of small UAVs. In this study, a swept sine wave was used to drive the carbon fiber tubes of a small four-rotor helicopter. Piezoelectric elements attached to the tube acted as actuator and sensor. The actuator created Lamb waves on the surface of the structure. Experimental data was collected with a free tube and after placing a clamp at different locations along the tube. The S-transforms of the sensory signals were used for analysis, and backpropagation-type artificial neural networks were used for classification. Keywords: structural condition monitoring, swept sine wave, S transform 1. INTRODUCTION Composite materials are an excellent choice for building many structural parts of modern aircraft and unmanned aerial vehicles (UAVs). Their light weight, high strength, and corrosion resistance enable engineers to achieve better performance, lower maintenance costs, and longer life of air vehicles. The installation of Structural Health Monitoring (SHM) systems on the composite structures of UAVs is expected to improve their reliability when performing various tasks at distant locations. Most SHM systems use active or passive sensors for data collection and determine the health status of structures by processing the signals and interpreting the results. Piezoelectric elements are among the most common actuators/sensors, as they are cheap and convert electricity into vibration and vice versa [1]. In this study, the generation of Lamb waves with a sweeping sine wave through a piezoelectric actuator was proposed to diagnose the condition of a short composite tube that could not easily be diagnosed with single-frequency bursts, while working effectively at low sampling rates. Lamb waves are generated at the surface with proper excitation, and their propagation changes even with slight surface defects. Therefore, many recent SHM methods evaluate Lamb wave propagation using time-frequency domain methods [2,3,4]. The short-time Fourier transform (STFT), the wavelet transform (WT) and the S transform (ST) are examples of algorithms used for this purpose. In this study, Lamb waves were generated using a sweeping sine wave. The S transform was used to extract and encode features indicating the condition of the structure. A backpropagation neural network was used to classify the cases. The proposed approach was implemented to develop a minimal SHM system for a small four-rotor helicopter, the DraganFlyer V ti Pro. The validation experiment was carried out on the carbon fiber tubes of the helicopter's main frame. In order not to destroy the tubes, the signals were collected using free tubes and tubes with a clamp attached at different locations. Analysis of the experimental data showed that the characteristics of the most significant peaks of the frequency responses varied when the clamps were placed at different locations and tightened at various levels.
The results also indicate that the proposed method is a suitable approach for the detection of structural and assembly problems of the tubes. 2. THEORETICAL BACKGROUND The Fast Fourier Transform (FFT) is an efficient algorithm for transforming a signal from the time domain into the frequency domain. The frequency domain characteristics of the signals can be easily extracted and analyzed after the FFT [5]. However, the frequency domain representation carries no information about changes in the time domain. In many applications, such as speech recognition and SHM, a time-frequency representation (TFR) is desirable [6]. The short-time Fourier transform (STFT) [7], the wavelet transform (WT) [8] and the S transform (ST) [9] can be used to study the time-varying characteristics of spectral information. The short-time Fourier transform includes time domain information by computing the Fourier transform over short periods of time defined by a window function. As the window slides along the time axis, the Fourier transform of the entire time series is obtained piece by piece.

The assembly of these pieces represents the characteristics of the signal. The STFT can be written as:

STFT(τ, f) = ∫_{−∞}^{+∞} h(t) w(τ − t) exp(−i2πft) dt    (1)

where h(t) is the time series, w is the window function, and τ is the position of the window along the time axis. The choice of window width determines the compromise between the frequency and temporal resolution of the analysis: a wide window provides relatively better frequency resolution, while a narrow window provides better time resolution [3]. Since a fixed window size limits both frequency and time resolution, the WT, the ST and other Fourier-derived methods have been developed to adapt the window width as a function of frequency to achieve optimal resolution. These multiple-resolution algorithms can achieve good time resolution at high frequencies and good frequency resolution at low frequencies [9]. Stockwell introduced the S transform as an extension of the continuous wavelet transform (CWT). It is defined as a phase correction of the CWT [6],

S(τ, f) = exp(i2πfτ) W(τ, d)    (2)

where

W(τ, d) = ∫_{−∞}^{+∞} h(t) w(t − τ, d) dt    (3)

is the CWT of a function h(t). Therefore, the S transform is defined as [6]:

S(τ, f) = ∫_{−∞}^{+∞} h(t) (|f|/√(2π)) exp(−(τ − t)² f² / 2) exp(−i2πft) dt    (4)

The S transform is based on a moving and scalable Gaussian window, and can provide frequency-dependent resolution while maintaining a direct relationship to the Fourier spectrum [6].
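As an illustration of Eq. (4), the following minimal Python sketch evaluates the S-transform of a synthetic swept sine numerically. The sampling rate, record length and frequency grid are illustrative assumptions, not the settings of the experiment described below.

import numpy as np

def s_transform(h, fs, freqs):
    # Direct evaluation of Eq. (4): for each frequency, a Gaussian
    # window whose width scales as 1/f slides along the time axis.
    n = len(h)
    t = np.arange(n) / fs
    S = np.zeros((len(freqs), n), dtype=complex)
    for i, f in enumerate(freqs):
        osc = h * np.exp(-2j * np.pi * f * t)   # demodulation term
        for j, tau in enumerate(t):
            w = (abs(f) / np.sqrt(2 * np.pi)) * np.exp(-0.5 * (tau - t) ** 2 * f ** 2)
            S[i, j] = np.sum(osc * w) / fs      # discrete approximation of the integral
    return S

# Synthetic swept sine, loosely mimicking a 1 Hz - 102.4 kHz sweep.
fs = 512e3                            # assumed sampling rate [Hz]
t = np.arange(0.0, 2e-3, 1.0 / fs)    # assumed 2 ms record
f0, f1 = 1.0, 102.4e3
x = np.sin(2 * np.pi * (f0 + (f1 - f0) * t / (2 * t[-1])) * t)  # linear chirp

freqs = np.linspace(5e3, 100e3, 40)   # analysis frequencies [Hz]
S = np.abs(s_transform(x, fs, freqs))
i, j = np.unravel_index(np.argmax(S), S.shape)
print(f"largest peak: {freqs[i] / 1e3:.1f} kHz at t = {t[j] * 1e3:.3f} ms")

For a chirp such as this, the peaks of |S| fall on a diagonal ridge in the time-frequency plane, which is the behavior exploited in the results below.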

3. SET UP OF THE EXPERIMENT The experiment was carried out on the carbon fiber tubes of the main structure of a small four-rotor helicopter, the DraganFlyer V ti Pro. Figure 1 shows one of the tubes. The tube is 190 mm long; its outer diameter is 5 mm and its inner diameter is 3 mm. Two identical piezoelectric elements were attached to the surface at opposite ends of the composite tube. The piezoelectric elements have a diameter of 12 mm and a thickness of 0.6 mm. One piezoelectric element was used as an actuator and the other as a sensor. The actuator was excited with a sweeping sine wave; the sensor collected the surface waves that propagated along the tube. The swept sine wave was generated by a spectrum analyzer (Stanford Research Systems model SR780 2-channel network analyzer), shown in Figure 2. The frequency of the generated swept sine wave was between 1 Hz and 102.4 kHz. In the experiments, the tube was either left free or a clamp was attached to it, to study the influence of an obstacle disturbing the surface waves. The frequency content of the signal acquired at the sensor was analyzed by the same spectrum analyzer. The same experimental procedure was repeated with the clamp placed at 40 mm, 80 mm and 120 mm from the actuator. Fig. 1 Carbon fiber tube with attached piezoelectric elements. Fig. 2 Experiment setup. 4. RESULTS The S-transform of the signals sampled at the sensor was calculated. Figure 3 shows the S-transforms of the free tube and of the same tube with the clamp placed at three different locations; the time-frequency characteristics of the sensory signals are illustrated by contours representing certain high amplitude levels. The contours of the free tube and of the clamped tube had very different characteristics. The free tube had the highest peaks, as the clamps obstructed the propagation of the surface waves and reduced their energies. [Fig. 3 S-transform of the sensory signals: a) free tube, b) clamped on the left, c) clamped in the middle, d) clamped on the right.] Since the excitation signal was a sweeping sine wave, the peaks lie on a diagonal line correlating the increase of the dominant frequency with time. The level of the most significant peaks of the S-transform indicated the condition of the tube. Figure 4 compares the peak characteristics of the free tube and of the tube clamped on the right. [Fig. 4 Peak contour: free tube and tube clamped on the right.] The time, frequency and amplitude of the most significant peaks were isolated and used to estimate the condition of the tubes. Fig. 5 illustrates the peaks along the time axis and Fig. 6 illustrates the peaks along the frequency axis. [Fig. 5 Maximum amplitude along the time axis; Fig. 6 Maximum amplitude along the frequency axis. Curves: free tube, clamped on the left, clamped in the middle, clamped on the right.] The encoded time, frequency and amplitude information extracted from the peaks of the S-transforms of the dynamic responses under the different conditions was used to train neural networks. The inputs and outputs of the backpropagation neural network model are illustrated in Figure 7. The neural network had four inputs, four hidden nodes, and one output. The output was an index representing the state of the tube: a digital value when separating free and clamped tubes, and an analog value when estimating the location of the clamp. The inputs were the amplitudes of the frequency response at four critical frequencies identified from the S-transform plots: 37, 50, 70, and 87 kHz. The signal amplitudes at these four frequencies were presented to the backpropagation-type neural network. The experiments were repeated with the free tube and with the same tube clamped at various locations. 16 of these cases were used for training while the other 16 cases were used for testing.
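For illustration, the sketch below implements a 4-4-1 backpropagation network of the kind described above and trains it to separate free and clamped cases. The feature values and the labeling rule are invented placeholders; the actual inputs would be the S-transform peak amplitudes at 37, 50, 70 and 87 kHz.

import numpy as np

rng = np.random.default_rng(0)

# Placeholder features: 16 cases x 4 amplitudes (values invented).
X = rng.random((16, 4))
# Invented labeling rule standing in for free (0) vs. clamped (1) cases.
y = ((X[:, 0] + X[:, 2]) > 1.0).astype(float).reshape(-1, 1)

# 4-4-1 network, matching the architecture described in the paper.
W1 = rng.normal(0.0, 0.5, (4, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, (4, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    h = sig(X @ W1 + b1)                      # hidden layer activations
    out = sig(h @ W2 + b2)                    # output index in (0, 1)
    d2 = (out - y) * out * (1 - out)          # output-layer delta (MSE loss)
    d1 = (d2 @ W2.T) * h * (1 - h)            # hidden-layer delta
    W2 -= lr * h.T @ d2 / len(X); b2 -= lr * d2.mean(axis=0)
    W1 -= lr * X.T @ d1 / len(X); b1 -= lr * d1.mean(axis=0)

pred = (sig(sig(X @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
print("training accuracy:", float((pred == y).mean()))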

[Fig. 7 Backpropagation neural network model: four amplitude inputs, an input layer, a hidden layer, and an output layer producing the state index.] Training and testing were performed for two different purposes. First, the diagnostic method aimed at separating the free and clamped tubes. The index of the training data extracted from the free-tube dynamic response was labeled 0 and all others were labeled 1. The neural network converged rapidly during training and the accuracy of the diagnostic estimate was 88%: of the 16 test cases, only 2 were incorrectly identified. Second, the neural network estimated the location of the clamp. The training and testing process was the same as above, except that the training data index was labeled 2, 3, or 4 depending on the location of the clamp (2 = left, 3 = middle, 4 = right). After training, the neural network estimated the tube health index with 38% accuracy on the test cases: of the 16 test cases, only 6 were classified correctly. Based on these results, the proposed method can be used with confidence to determine whether or not surface defects exist on a tube. If the clamps used to hold components on a tube were to loosen, it would be possible to detect the problem. The confidence level would be much lower when estimating the location of the obstruction. 5. CONCLUSIONS In this study, the feasibility of using a sweeping sine wave to detect defects and to assess whether clamps were firmly attached to the tubes was tested. For the defect detection evaluation, a clamp was attached to a tube. This approach allowed the experiments to be repeated hundreds of times at different clamping positions and tightening levels without damaging the tube. The study indicated that swept sine wave excitations can be used to detect defects or attached clamps that perturb the propagation of Lamb waves. The characteristics of the S-transforms of the waves propagated to the sensor indicated the obstructions that influenced the propagation of the surface waves; these obstacles can be defects or clamps. The encoded signal parameters were classified using backpropagation neural networks. The neural network accurately detected the existence of the obstacle; however, the location estimate was considerably less accurate. 6. ACKNOWLEDGMENTS The authors wish to thank the Graduate School of Florida International University for providing the Dissertation Year Scholarship and Teaching Assistant Scholarships. 7. REFERENCES [1] D. P. Garg, M. A. Zikry and G. L. Anderson, Current and potential future research activities in adaptive structures: an ARO perspective, Smart Materials and Structures, No. 10, 2001. [2] Y. Lu, X. Wang, J. Tang, and Y. Ding, Damage Detection Using Piezoelectric Transducers and the Lamb Wave Method: II. Robust and Quantitative Decision Making, Smart Materials and Structures, No. 7, 2008. [3] M. Niethammer, L. J. Jacobs, J. M. Qu, and J. Jarzynski, Time-Frequency Representations of Lamb Waves, Acoustical Society of America, Vol. 109, No. 5, 2001. [4] S. S. Kessler, S. M. Spearing and C. Soutis, Damage detection in composite materials using Lamb wave methods, Proceedings of the American Society for Composites, September 9-12, 2001, Blacksburg, VA. [5] E. P. Carden and P. Fanning, Vibration-Based Condition Monitoring: A Review, Structural Health Monitoring, Vol. 3, No. 4, 2004. [6] R. G. Stockwell, L. Mansinha and R. P. Lowe, Localization of the Complex Spectrum: The S Transform, IEEE Transactions on Signal Processing, Vol. 44, No. 4, 1996. [7] S. Hurlebaus, M. Niethammer, L. J. Jacobs and C. Valle, Automated Methodology for Locating Notches with Lamb Waves, Acoustical Society of America, Vol. 2, No. 4, 2001. [8] S. Legendre, D. Massicotte, J. Goyette, and T. K. Bose, Wavelet-Transform-Based Analysis Method for Lamb-Wave Ultrasonic NDE Signals, IEEE Transactions on Instrumentation and Measurement, Vol. 49, No. 3, 2000. [9] X. Wang and I. N. Tansel, Modeling the Propagation of Lamb Waves using a Genetic Algorithm and S-transformation, Structural Health Monitoring, Vol. 6, No. 1, 2007.

Electromechanical Behavior of CNT Nanocomposites Yves Ngabonziza* Department of Mathematics, Engineering, and Computer Science, LAGCC, City University of New York, Long Island City, NY * Jackie Li Department of Mechanical Engineering, The City College of City University of New York, New York, NY ABSTRACT This article studies the electrical resistance responses of multi-walled carbon nanotube (MWCNT) reinforced polypropylene (PP) nanocomposites under mechanical tensile loading. A standard tensile test was performed while the electrical resistance was measured using the 2-probe method. From our previous work on CNT/PP nanocomposites, the percolation threshold of electrical conductivity is around 3.8 wt% CNT. The influence of this percolation threshold on the electrical resistance under mechanical load was investigated. The results are discussed and compared. INTRODUCTION It is well known that polymers are naturally insulating; combined with CNTs, they become conductive, which makes them even more attractive, in addition to their other interesting properties such as light weight, high strength, machinability, and optical properties, among others. In our previous work [1], we showed that PP-MWCNT composites produced by injection molding had an electrical conductivity percolation threshold of around 3.8 wt% CNT. For the measurement of electrical resistance, two techniques can be used. The first, called the two-point probe technique, is an electrical potential method consisting of two single-ended electrodes attached to the surface of the conductive structure. A DC or AC current source is applied across the two electrodes and the resulting voltage across the same electrodes is measured. The electrical resistance between these two electrodes is then calculated based on Ohm's law. The second, called the four-point probe technique, is an electrical impedance method, which uses separate pairs of electrodes to generate current and sense voltage; that is, the outer and inner terminals of the electrodes are used as current and voltage contacts, respectively. The key advantage of the four-point probe technique over the traditional two-point probe technique is that, by separating the current source and voltage sense terminals, it eliminates wiring impedance and contact resistance contributions. On the other hand, when space is limited, as with the 1-D strip samples commonly used in laboratory tests, the two-point probe technique can be applied more conveniently than the four-point probe technique. Several investigators have been interested in the use of piezoresistivity and electrical conductivity for sensing purposes [2-4]. Most of those studies were carried out on carbon fiber composites and proved to be efficient. Due to carbon fiber's inherent piezoresistivity and electrical conductivity (σ ≈ 5×10^4 S m^-1), this technique has been studied by many researchers for the self-sensing of carbon-based composites. Among others, Chung and associates [5-15] have carried out extensive research in the area of self-sensing/self-monitoring/self-diagnostics of carbon-based systems. Pham et al. [16] developed carbon nanotube polymer composite films that can be used as strain sensors with customized sensitivity. The films were made by melt processing or solution casting of poly(methyl methacrylate) (PMMA) with MWNT. However, few works have examined the effect of mechanical loading on the electrical conductivity of polymeric composites.
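As a small illustration of the two-point probe calculation described above, the following sketch converts a voltage/current reading into resistance via Ohm's law and into conductivity for a uniform strip; the sample geometry and readings are hypothetical.

def strip_conductivity(voltage_V, current_A, length_m, width_m, thickness_m):
    """Two-point probe estimate: R = V/I (Ohm's law), then
    sigma = L / (R * A) for a uniform 1-D strip of cross-section A.
    Note: contact and wiring resistance are NOT removed, which is
    precisely the limitation of the two-point method noted above."""
    R = voltage_V / current_A
    area = width_m * thickness_m
    return R, length_m / (R * area)

# Hypothetical reading for a 40 mm x 10 mm x 1 mm strip
R, sigma = strip_conductivity(2.0, 1e-4, 0.04, 0.01, 0.001)
print(f"R = {R:.0f} ohm, sigma = {sigma:.3e} S/m")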
For the PP nanocomposites used in this study, the melt-blended CNT-PP concentrate was diluted with neat PP in the injection molding process. This letting down of pelletized masterbatches is a very common practice for handling fine particles during injection molding. This article investigates the influence of mechanical loading on the electrical conductivity of CNT-PP nanocomposites. Beyond their other attractive properties, CNT-PP nanocomposites would be a good candidate for strain-sensing applications; in this article, the effect of mechanical loading on electrical conductivity is investigated.

EXPERIMENT To make the nanocomposites, a CNT-PP masterbatch (concentrate) was dispersed in a polypropylene base material using a 55-ton reciprocating screw injection molding machine (Cincinnati Milacron-Fanuc, model Robo 55R-22). The base polypropylene (BP Amoco's Acclear 8449) was a random copolymer with a melt index of 12 g/10 min. The MWNT masterbatch was obtained from Hyperion Catalysis (grade MB) and contained approximately 20% by weight of MWNT. For mechanical loading, tensile tests were performed using an INSTRON universal testing machine; strain was recorded with an extensometer. The mechanical load was applied at a standard rate of 0.01/min. For each sample type, three specimens were tested and compared for consistency. The standard 2-probe method was used to measure the electrical resistance of the nanocomposites. Since the percolation threshold had been assessed to be around 4 wt% CNT, only samples with CNT weight percentages above the percolation threshold were analysed. The specimens tested had CNT weight percentages of 5, 7, 10, and 12. Once a specimen was ready, electrical current was applied to it through an electrical circuit powered by a DC power source; the DC source was used for testing because of its simplicity. However, it is worth noting that in actual practice an AC source at 1 kHz is commonly used to avoid inaccuracy caused by bias. The electrical data was recorded using the LabVIEW software. RESULTS AND DISCUSSION During mechanical loading, the electrical resistance of the system was obtained from the 2-probe measurement as described above; at the same time, stress-strain curves were obtained from the tensile test data. Figure 2 shows the stress-strain curves of the three MWCNT-PP composite specimens with 5 wt% CNT. Figure 1: Sample on the INSTRON machine with the extensometer and electrical probes. Figure 2: Stress-strain curves for 5 wt% MWCNT-PP composite specimens. The change in resistance was also measured using the 2-probe method, but it was not clearly pronounced. This could be due to the fact that at 5 wt% CNT the conductivity of the composite sample is still too low for the electrical resistance change to be clearly detected. Figure 3 shows the change in electrical resistance due to the change in strain. Figure 3: Electrical resistance change due to strain change for 5 wt% MWCNT-PP composite specimens.

The same tests performed on the PP-MWCNT composite samples with 5 wt% CNT were also performed on the 7 wt% CNT composite samples. Figure 4 shows the stress-strain curves for the composite specimens with 7 wt% CNT. Figure 4: Stress-strain curves for 7 wt% MWCNT-PP composite specimens. The electrical resistance change was more pronounced in this case, although an obvious pattern was still not present; it is illustrated in Figure 5. Figure 5: Electrical resistance change due to strain change for 7 wt% MWCNT-PP composite specimens. For the PP nanocomposites with 10 wt% and 12 wt% CNT, the sensitivity of the resistance change to mechanical loading was more pronounced and clear patterns could be observed. Figures 6 and 7 show the stress-strain and resistance change curves, respectively, for the composite specimens with 10 wt% CNT. Figure 6: Stress-strain curves for 10 wt% MWCNT-PP composite specimens. Figure 7: Electrical resistance change due to strain change for 10 wt% MWCNT-PP composite specimens. In Figure 7, the increased sensitivity of the electrical resistance to strain change can be appreciated; the change follows a clear pattern compared with the CNT percentages of 5% and 7%. The same increase in sensitivity was also present for the PP nanocomposites with 12 wt% CNT. Figure 8 shows the stress-strain curves for the 12 wt% PP-MWCNT composite specimens.
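The fractional resistance change plotted in Figures 3, 5 and 7 is often condensed into a single strain sensitivity (gauge factor). A minimal sketch follows, with invented data standing in for the measured records:

import numpy as np

def gauge_factor(strain, resistance):
    """Least-squares slope of (R - R0)/R0 versus strain; a common
    scalar measure of piezoresistive strain sensitivity."""
    dr_over_r0 = (resistance - resistance[0]) / resistance[0]
    return np.polyfit(strain, dr_over_r0, 1)[0]

# Invented illustrative data: resistance rising with tensile strain
strain = np.linspace(0.0, 0.02, 10)          # 0 to 2 % strain
resistance = 1.0e4 * (1.0 + 3.5 * strain)    # ohms; built so GF = 3.5
print(f"gauge factor ~ {gauge_factor(strain, resistance):.2f}")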

Figure 8: Stress-strain curves for 12 wt% MWCNT-PP composite specimens. Figure 9 illustrates the sensitivity to strain change for the PP nanocomposites with 12 wt% CNT. Similar to the 10 wt% MWCNT-PP composite samples, the sensitivity of the resistance change is more pronounced. Figure 9: Electrical resistance change due to strain change for 12 wt% MWCNT-PP composite specimens. CONCLUSION CNT-PP nanocomposites were produced by injection molding. Composite specimens showed sensitivity of the electrical resistance change to strain change. The level of sensitivity varies with the CNT weight percentage; it is less pronounced in composites with 5 wt% and 7 wt% CNT. The reason could be that these CNT contents are close to the percolation threshold of about 3.8 wt%. The sensitivity became more pronounced for the nanocomposites with 10 wt% and 12 wt% CNT. The above results make PP-MWCNT composites promising for sensing purposes; this additional property adds to the already existing range of potential applications. REFERENCES [1] Ngabonziza, Y., Li, J. and Barry, C. F., Electrical conductivity and elastic properties of MWCNT-PP nanocomposites, Proceedings of the ASME International Mechanical Engineering Congress and Exposition. [2] Chung, D. D. L. and Wang, S., Self-monitoring of damage and strain in carbon fiber polymer-matrix structural composites by electrical resistance measurement, Polymers and Polymer Composites, 11(7). [3] Todoroki, A., Tanaka, M. and Shimamura, Y., Electrical resistance change method to monitor delamination of CFRP laminates: Effect of electrode spacing, Composites Science and Technology, 65. [4] Shen, L., Li, J., Liaw, B. M., Delale, F., and Chung, J. H., Modeling and analysis of the electrical resistance measurement of carbon fiber polymer-matrix composites, Composites Science and Technology, 67. [5] Chung, D. D. L., Self-monitoring structural materials, Materials Science and Engineering: R: Reports, R22. [6] Wang, S. and Chung, D. D. L., Self-monitoring of strain and damage by a carbon-carbon composite, Carbon, 35(5). [7] Chung, D. D. L., Thermal analysis of carbon fiber polymer-matrix composites by electrical resistance measurement, Thermochimica Acta, 364. [8] Mei, Z. and Chung, D. D. L., Debonding of thermoplastic composites induced by thermal stress, studied by electrical contact resistance measurement, International Journal of Adhesion and Adhesives, 20.

[9] Wang, S. and Chung, D. D. L., Apparent negative electrical resistance in carbon fiber composites, Composites: Part B, 30. [10] Wang, S., Mei, Z., and Chung, D. D. L., Interlaminar damage in carbon fiber polymer-matrix composites, studied by electrical resistance measurement, International Journal of Adhesion and Adhesives, 21. [11] Wang, X. and Chung, D. D. L., Self-monitoring of fatigue damage and dynamic strain in carbon fiber polymer-matrix composites, Composites: Part B, 29B. [12] Wang, X. and Chung, D. D. L., Short carbon fiber reinforced epoxy coating as a piezoresistive strain sensor for cement mortar, Sensors and Actuators A, 71. [13] Wen, S. and Chung, D. D. L., Piezoresistivity in continuous carbon fiber cement-matrix composite, Cement and Concrete Research, 29. [14] Chen, P.-W. and Chung, D. D. L., Concrete as a new strain/stress sensor, Composites: Part B, 27B. [15] Mei, Z. and Chung, D. D. L., Effects of temperature and stress on the interface between concrete and its carbon-fiber epoxy-matrix composite retrofit, studied by electrical resistance measurement, Cement and Concrete Research, 30. [16] Pham, G. T., Park, Y., Liang, Z., Zhang, C., Wang, B., Processing and patterning of conductive carbon nanotube/thermoplastic films for strain sensing, Composites: Part B, 39.

Application of structural integrity assessment software Marko RAKIN, Faculty of Technology and Metallurgy, University of Belgrade, Belgrade, Serbia, marko@tmf.bg.ac.rs; Nenad GUBELJAK, Faculty of Mechanical Engineering, University of Maribor, Maribor, Slovenia; Bojan MEDJO, Faculty of Technology and Metallurgy, University of Belgrade, Belgrade, Serbia; Taško MANESKI, Faculty of Mechanical Engineering, University of Belgrade, Belgrade, Serbia; Aleksandar SEDMAK, Faculty of Mechanical Engineering, University of Belgrade, Belgrade, Serbia. ABSTRACT This paper gives a brief description of two projects related to structural integrity assessment, including online software based on the SINTAP procedure. An example of the use of the software is presented, and the results obtained are compared with experimental data and discussed. Keywords: structural integrity assessment, welded joint, crack initiation, fracture initiation. INTRODUCTION Fracture mechanics, with its theoretical and experimental techniques [1, 2], is a good basis for reliable structural integrity assessment procedures. There are several procedures of this type, and they are already being used for many materials; however, they can still be improved and tuned for greater efficiency and lower design, construction, and maintenance costs. The SINTAP (Structural Integrity Assessment Procedure) approach [3] offers a very good basis for a software solution, one of which was developed through the MOSTIS (Mobile Structural Integrity Assessment System) project [4], as a segment of a structural integrity assessment system. Four standard evaluation levels of the SINTAP procedure (0, 1, 2 and 3) are used, ranging from simple but conservative approaches where data availability is limited, to more precise and complex approaches. A very important topic is the evaluation of structures composed of more than one material: welded joints. Results can be presented as Crack Driving Force (CDF) or Failure Assessment Diagrams (FAD). The crack driving force (for example, the applied J integral) can be plotted as a function of defect size for different applied loads, or as a function of load for different defect sizes, and compared with the fracture resistance of the material. In a FAD, an assessment is represented by a point or curve on a diagram, and failure is judged by the position of the point or curve relative to a failure assessment line. The result of this procedure is information on whether the real or postulated flaw will cause the failure of the structure or whether it is possible to continue its exploitation. The fundamental principle is that failure occurs if the applied crack driving force exceeds the fracture resistance of the material. Another step in the development of evaluation procedures was taken through the FITNET project [5], which included fatigue, corrosion and creep modules, in addition to the fracture procedures included in SINTAP.

BACKGROUND The purpose of structural integrity assessment is to determine the significance, in terms of fracture and plastic collapse, of defects present in metal structures and components. The load or the defect dimensions can be varied, in order to check the possible increase in load and/or defect size that will lead to failure. It is important to note that this approach is not intended to replace existing methods, but rather to serve alongside them throughout the life of a structure. It can be used for evaluation at the design stage to specify material properties, design stresses, inspection procedures/intervals, and acceptance criteria. It can also be used for fitness-for-purpose assessment during manufacturing, against applied manufacturing standards. During the operation phase, it can be used to decide whether a structure or component is safe for continued use (i.e. whether it is safe to continue operation until a repair can be carried out in a controlled manner), despite detected flaws or modified service conditions. Taking into account that the analytical and other methods used to create the SINTAP/FITNET procedure (and therefore also the software that is part of the MOSTIS system) can give an estimate of the state of the structure, but cannot provide more detailed information (e.g. stress or strain data at some important location in the structure), a new system is currently being developed: OLMOST (Online Monitoring of Structures and Fatigue) [6]. It is an integrated hardware and software solution for online and on-site measurement of the state of a structure during its service life, in order to prevent failures due to flaws and inadequate design. Configured as an expert system for online monitoring and automatic analysis of the measured data, including automatic warning signals to the supervisor, it is based on a database of materials and of the stress-strain behavior of components. The deformation state of the structure will be evaluated by optical stereometric measurement of the surface at the critical points and/or other measurement methodologies, depending on the type of construction, operating conditions, safety requirements, etc. Considering that flaws can change the behavior of the structure, changes in the state of the structure can be used to indicate defects and anomalies that cannot be directly measured. Wherever possible, wireless measurement shall be applied; the sensors will connect wirelessly to the processor unit and the signals will be assembled on a mobile computing device. This device can be connected to the Internet through the GSM mobile network and then to the server with the master program for fault identification. In case of overload or missing input parameters, it will provide different warning signals, depending on the measured data. The stress-strain behavior of the structure or any of its components will be evaluated by finite element (FE) numerical modeling. The comparison between the numerical results and the measured strains (or other appropriate quantities) provides relevant information about the stress state of the component and the load. The results for the critical component in regular service will be used to establish an acceptable loading window. If the deformation state does not fall within this window, the expert system will provide the decision to safely shut down or stop the use of the structure.
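The "acceptable loading window" logic described above can be sketched as a simple threshold check. The warning levels mirror the expert-system behavior outlined here, while the bounds and readings are hypothetical values standing in for FE-derived limits and sensor data.

def check_loading_window(measured_strain, lower, upper):
    """Return a warning level for one monitored point. The bounds
    would come from FE analysis of the component in regular service
    (values here are hypothetical)."""
    if measured_strain is None:
        return "WARNING: missing input parameter"
    if lower <= measured_strain <= upper:
        return "OK: within acceptable loading window"
    return "ALARM: outside window - consider controlled shutdown"

# Hypothetical micro-strain bounds for one critical location
for reading in (450.0, 910.0, None):
    print(reading, "->", check_loading_window(reading, 100.0, 800.0))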
A reliable estimate of the defect size and its position in the component can be made by comparing the measured deformation behavior with the results of a numerical model with an assumed defect size and position. Another important use of this new system is the recovery of the load history and the evaluation of the damage accumulated during the service life of the structure. The main objective is on-site failure assessment analysis and estimation of the remaining service life of the damaged structure, to improve the planning of repairs and to optimize the life cycle of the components, with the possibility of providing appropriate commands to control the equipment. APPLIED PROCEDURE An example of an evaluation procedure using the MOSTIS software on a single edge notched bend (SENB) welded specimen is presented. The base metal (BM) is NIOMOL 490 High Strength Low Alloy (HSLA) steel. The fatigue precrack lies in the weld metal (WM), along the axis of symmetry of the joint (half of the specimen is shown in Figure 1). The properties of the base metal and the weld metal are given in Table 1. From these data it can be seen that the analyzed joint is overmatched, since the mismatch ratio (the ratio between the yield strength of the weld metal and that of the base metal) is greater than 1. As already mentioned, the SINTAP/FITNET procedures can take this difference in material properties within the welded joint into account. The behavior of the joint under external load is analyzed using the SINTAP level 2 procedure, and the results are presented in the FAD diagram (Fig. 2). This diagram represents the change of state of the structure

during the load increase (straight line) and the critical state of the structure (failure assessment line). The value K_r, the ordinate of the diagram, represents the ratio between the applied stress intensity factor K_I and the critical stress intensity factor K_Ic. The abscissa L_r is the ratio between the applied load and the plastic limit load of the structure. Table 1 Material properties of the weld metal (WM) and base metal (BM): E [GPa], R_p0.2 [MPa], R_m [MPa]. Each of the points on the straight line corresponds to a specified load level, and as the load increases (in increments of 5 kN) the point moves in the direction marked by the arrow. The crack length is kept constant during the calculations (and equal to the initial fatigue precrack length), taking into account that the object of this analysis is the initiation of crack growth. The influence of the joint width on fracture initiation is analyzed in [7], using the local approach to fracture - the Gurson-Tvergaard-Needleman (GTN) model. FINAL RESULTS AND OBSERVATIONS It can be seen that increasing the load from 10 kN (point A) to 70 kN (point C) moves the structure towards, and finally to, the critical state (point B). Fig. 1 Dimensions of the SENB specimen and welded joint (2H = 6 mm). Fig. 2 FAD diagram for the welded SENB specimen.
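A minimal sketch of such a FAD check follows. It uses a widely quoted default approximation of the failure assessment line; the exact SINTAP level 2 mismatch curve differs in detail, and all numerical inputs are hypothetical rather than the values of the SENB test.

import math

def fal(lr):
    """Failure assessment line, using the widely quoted default
    approximation f(Lr) = (1 + 0.5*Lr**2)**-0.5 * (0.3 + 0.7*exp(-0.6*Lr**6));
    the exact SINTAP level-2 mismatch curve differs in detail."""
    return (1.0 + 0.5 * lr ** 2) ** -0.5 * (0.3 + 0.7 * math.exp(-0.6 * lr ** 6))

def assess(k_i, k_ic, load, limit_load, lr_max=1.0):
    """Locate the assessment point (Lr, Kr) and judge it against the
    line; lr_max = 1.0 is a simplifying assumption for the cut-off."""
    kr, lr = k_i / k_ic, load / limit_load
    safe = lr <= lr_max and kr <= fal(lr)
    return kr, lr, safe

# Hypothetical numbers for an increasing load at fixed crack length:
# K_I assumed proportional to load, reaching 30 MPa*sqrt(m) at 70 kN.
for F in (10e3, 40e3, 70e3):   # applied load [N]
    kr, lr, safe = assess(k_i=30.0 * (F / 70e3), k_ic=40.0,
                          load=F, limit_load=60e3)
    print(f"F = {F / 1e3:.0f} kN: Kr = {kr:.2f}, Lr = {lr:.2f}, "
          f"{'acceptable' if safe else 'critical'}")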

The moment of crack growth initiation was determined experimentally and the corresponding measured force (F_i) was compared with the results obtained using MOSTIS. It turns out that this load is close to critical according to the failure assessment line, since the point corresponding to F_i in the FAD diagram belongs to the critical state of the structure. Therefore, the state of the structure at fracture initiation is correctly predicted, and the evaluation with MOSTIS is on the safe side compared with the experimental investigations, establishing a lower load level as critical. Additionally, the value corresponding to the experimentally determined crack initiation was varied, in order to check the sensitivity of the evaluation to variation of the experimental data; the interval 0.95 F_i < F < 1.05 F_i was used. It can be seen (Fig. 2) that decreasing this value by 5% brings the assessment point fairly close to the failure assessment line. Considering that crack initiation is difficult to determine exactly, it would be preferable to use a certain safety factor to ensure that the failure assessment is safe and does not overestimate the load-bearing capacity of the structure. A higher level of the SINTAP procedure can be expected to give less conservative results. However, in the example presented only basic material data is used, because this is what is generally known for most materials in exploitation. ACKNOWLEDGMENTS The authors are grateful for the financial support of the Ministry of Science of Serbia within the framework of the Eureka projects E! 3927 Mobile Structures Integrity System - MOSTIS and E! 5348 Online monitoring of structures and fatigue - OLMOST. REFERENCES [1] T. L. Anderson, Fracture Mechanics, London: CRC Press. [2] K. H. Schwalbe, Basic Engineering Methods of Fracture Mechanics and Fatigue, Geesthacht: GKSS Research Center. [3] SINTAP: Structural Integrity Assessment Procedure, EU Project BE-1462, Brite Euram Programme. [4] MOSTIS: Movable Structures Integrity System, Eureka Project E! 3927. [5] FITNET: European Fitness-for-Service Network. [6] OLMOST: Online monitoring of structures and fatigue, Eureka Project E! 5348. [7] M. Rakin, N. Gubeljak, M. Dobrojević, A. Sedmak, Modelling of ductile fracture initiation in strength mismatched welded joint, Engineering Fracture Mechanics, Vol. 75, 2008.

Pulsed Atmospheric Pressure Plasma System Applied to Surface Treatment of PCBs Fuhliang WEN, Jhenyuan LIN Department of Computer Aided and Mechanical Engineering/Graduate Institute of Automation and Mechatronics, St. John's University, Tamsui, Taipei County 25135, Taiwan and Hungjiun WEN, Kuo-Hwa CHANG Department of Industrial and Systems Engineering, Chung Yuan Christian University, Chungli, Taoyuan County 32023, Taiwan ABSTRACT Since a Pulsed Atmospheric Pressure Plasma System (PAPPS) can be applied to surface modification of products with complex geometry in a non-vacuum environment, it saves a great deal of operating cost. The PAPPS has unique features of lower surface charge accumulation and higher processing repeatability, thanks to pulsed parameter tuning by a smart tuning technique in the manufacturing recipes; the system consists of an adapted DC power supply, a pulsed high-voltage controller and a high-frequency pulsed transformer. Based on a 2 kW maximum output power and a 20 kV limited working voltage with bipolar pulsed output mode, a PC remote control system was built through a communication interface to set the pulsed parameters and monitor the variation of the electrical power outputs. This low-temperature plasma jet was used as a tool for cleaning and activating PCB surfaces in an atmospheric chamber. Experimental results have shown that the PAPPS is capable of delivering a stable plasma jet under adjustable DC power supply, pulsed parameters, and air pressures. After plasma surface treatment, the hydrophilic characteristic of the printed circuit board (PCB) was improved, which means that the adhesion on the PCB surface is improved for component soldering and a better yield rate. Keywords: Atmospheric Pressure Plasma, Pulsed Parameter, Surface Modification, Plasma Jet, PCB Treatment 1. INTRODUCTION In the electronics industry, printed circuit boards (PCBs) have generally been adopted as the substrate for mounting integrated circuits, transistors, resistors and other electrical components, and for the wiring connections between each part. PCBs have developed from a single-layer structure to a multi-layer structure, where excellent adhesive quality is needed for surface mounting.
Therefore, plasma is used for surface modification to improve the surface function and the quality or yield rates of products [1, 2]. Since atmospheric pressure plasma can be applied to the surface modification of products with complex geometry in a non-vacuum environment, it definitely saves a great deal of operating cost. The pulsed atmospheric pressure plasma delivery system (PAPPS) has unique characteristics of lower surface charge buildup and higher processing repeatability, due to pulsed parameter tuning from a smart tuning technique in the manufacturing recipes [3]. Therefore, the PAPPS was adopted to remove the contaminants and improve the hydrophilic characteristic of the PCB surface by plasma ion bombardment. In addition, the adhesion of metal coatings or surface mounts on PCBs would be improved and the use of organic cleaning solutions would be reduced. There are two main sections described in this paper. In the first section, the practical design of a high-voltage pulsed power supply assembly was carried out, involving a high-frequency pulsed transformer adapted to a DC power supply and a pulsed power controller. Next, under appropriate dry pressurized air and a pulsed high-voltage supply, the atmospheric pressure plasma jet was generated through a plasma jet nozzle. This low-temperature plasma jet was used as a surface treatment tool for PCBs in an atmospheric chamber. Through instruments measuring contact angles and 3D surface profiles, the surface characteristics of the PCBs after surface treatment were determined. 2. PRESSURE PLASMA SYSTEM SETUP Fig. 1 Expected breakdown voltage at the cylindrical electrode of the plasma nozzle based on the Paschen curve [1].

[Fig. 2 Pulsed atmospheric pressure plasma setup and LabVIEW monitoring system: pressure regulating valve and air filter, DC power supply, high-voltage cable, air extraction, substrate, plasma injection nozzle with cylindrical electrode, high-frequency pulsed transformer, pulsed power controller, high-voltage detector, voltage probe, current probe, TDS3054B oscilloscope, GPIB interface board, compressed air, LabVIEW system.] Due to the high breakdown voltage of plasma at atmospheric pressure, intense collisions between air molecules or ion particles easily form an electric arc between the electrodes of the plasma injection nozzle, with a high possibility of damaging the electrodes or the treated surface of the substrate. Therefore, according to Paschen's law [4, 5], the designated breakdown voltage V_b for the pulsed power supply is calculated as in equation (1):

V_b = B·p·d / ln[ A·p·d / ln(1 + 1/γ) ]    (1)

where p is the air pressure, d is the distance between the electrodes, and γ is the average number of secondary electrons released from the cathode surface for each positive ion produced. Furthermore, α represents the ionization collision coefficient; the constants A and B depend on the characteristics of the air, but can be obtained from relevant experiments. They are related as described by equation (2):

α = A·p·exp(−B·p/E)    (2)

where E (= V/d) is the electric field for an applied voltage V over the distance d between the electrodes. The breakdown voltage V_b is thus a function of the product of the air pressure p and the electrode distance d, that is, V_b = f(p·d). Based on the above, we follow the Paschen curve, as shown in Fig. 1, to determine the applied voltage. The pulsed atmospheric pressure plasma system consisted of a pulsed high-voltage power supply, air tubing and supply equipment, and a LabVIEW monitoring platform for the electrical power, as shown in Fig. 2. The pulsed high-voltage power supply provided an adequate high-voltage AC field through a DC power supply controlled by a pulsed power controller and regulated by a high-frequency pulsed transformer. Its electrical specifications are a maximum output power of 2 kW, operating frequencies within 25 kHz, and a working voltage limited to 20 kV with bipolar pulse output mode. Through the GPIB and RS232 communication interfaces, a PC remote control system was built to set the pulse parameters and to monitor the variation of the electrical power outputs.
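A minimal numerical sketch of Eq. (1) is given below, assuming illustrative Townsend constants for air (published values vary by source) and an assumed secondary-emission coefficient γ; it is not the design calculation used for the nozzle.

import math

# Illustrative Townsend constants for air (values vary by source):
A = 15.0      # 1/(cm*Torr)
B = 365.0     # V/(cm*Torr)
GAMMA = 0.01  # secondary-electron emission coefficient (assumed)

def paschen_vb(p_torr, d_cm, a=A, b=B, gamma=GAMMA):
    """Breakdown voltage from Eq. (1):
    Vb = B*p*d / ln( A*p*d / ln(1 + 1/gamma) ).
    Valid only to the right of the Paschen minimum, where the
    logarithm in the denominator is positive."""
    pd = p_torr * d_cm
    denom = math.log(a * pd / math.log(1.0 + 1.0 / gamma))
    if denom <= 0:
        raise ValueError("p*d below the validity range of the curve")
    return b * pd / denom

# Atmospheric pressure (760 Torr) and a few electrode gaps
for d in (0.05, 0.1, 0.2):   # gap [cm]
    print(f"d = {d:4.2f} cm: Vb ~ {paschen_vb(760.0, d) / 1e3:.1f} kV")

With these assumed constants, the predicted breakdown voltages fall in the kilovolt range, consistent with the 20 kV working-voltage limit quoted above.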

The LabVIEW monitoring platform also consisted of an oscilloscope, a high-voltage differential probe, a current probe, and a high-voltage detector to record the output voltage at the plasma nozzle electrode after the high-frequency pulse transformer. In addition, the air supply was a simple piping system comprising a pump, piping, and a container for compressed air generation, plus some adjustable valves and appropriate air filters. 3. EXPERIMENTAL DESIGN OF PLASMA FOR PCB SURFACE TREATMENT In the last decade, pulsed plasma has made further progress in surface coating and modification techniques, due to its intense plasma energy tuned by electrical parameters such as power output, frequency, duty cycle, etc. [6]. In addition, it has a unique arc-suppression capability. However, many processing factors are involved in modifying the PCB surface to improve adhesion. We adopted the design of experiments (DOE) method to investigate the correlation model in interdisciplinary knowledge such as plasma surface processing. Using a categorization technique for the environmental factors and the controlled factors, it was possible to test the interactive relationships between these factors to find better operating parameters. Therefore, multi-factor experiments would identify the optimal combination for PCB surface modification to improve the hydrophilic or adhesive characteristic. 3.1 Determination of DOE parameters In order to find the factors that influence PCB surface modification, we performed the DOE method to determine the correlations between the factors. Basically, the conventional DOE procedure was followed: (1) collect the possible factors, such as processing time, airflow rate and category, plasma jet distance, frequency, duty cycle, electrical power (including voltage and current) and substrate materials; (2) classify these factors according to their characteristics and their percentage effects on the experimental results; (3) determine the reactive levels for the factors; (4) set the range of the levels. Through the above procedure, we finally determined that plasma treatment time, duty cycle, power, and ignition voltage at the injection nozzle [7, 8] would be the main influencing factors. 3.2 Plasma surface experiment and measurement According to the default processing parameters, the plasma jet stream performed ion bombardment on a 40 mm × 50 mm PCB to modify the surface, after cleaning with a methyl alcohol solution. In addition, the contact angle of all the treated parts was measured by the sessile drop method for surface tension, and the parts were inspected with a 3D surface profile measuring instrument. The hydrophilic or hydrophobic characteristic of the modified PCB surface was identified by the contact angles, as shown in Fig. 3 after 12 seconds of PAPPS bombardment. [Fig. 3 Drop contact angle after 12 seconds of plasma treatment.] After surface treatment, the PCB surface roughness shows an obvious modification, as shown in Fig. 4, where the uneven surface can be seen. Fig. 4 Surface profile after 12 seconds of pulsed ion bombardment on the PCB surface with 35% duty cycle, near 11.6 kHz operating frequency and an air pressure of 0.9 kg/cm². 4. RESULTS AND DISCUSSION The pulsed duty cycle is inversely proportional to the voltage applied across the electrodes of the plasma nozzle, as shown in Fig. 5.
Since the duty cycle represents the electrical power input in each valid period, a larger duty cycle provides more input power to compensate for the power loss of the plasma as the ion bombardment continuously impacts the PCB surface and consumes ions. Conversely, a lower duty cycle requires a higher applied voltage across the electrodes to provide the instantaneous power reserve needed to maintain the plasma state during ion bombardment, for pulse-on times of 16 to 20 μs. However, there are two different slope trends, the 20 μs setting being more sensitive than the 16 μs pulsed operation parameter. Otherwise, both have the best efficiency, or lowest power consumption, within the range of 40%-48% duty cycle, as shown in Fig. 6. Although we do not yet have clear proof, perhaps the material properties of the substrate and the categories of supplied air are the dominant factors; more research is needed to prove this conjecture.
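The trade-off between duty cycle and applied voltage can be illustrated with a back-of-the-envelope average-power estimate; the sketch assumes rectangular pulses and invented operating points, not measured data from Fig. 6.

def avg_power(duty_cycle, v_peak, i_peak):
    """Average electrical power for a pulsed supply, assuming a
    rectangular pulse at peak voltage and current (a simplification).
    Note duty_cycle = Ton * frequency for a fixed pulse-on time."""
    return duty_cycle * v_peak * i_peak

# Invented operating points: lower duty cycle compensated by higher voltage
for duty, v in ((0.35, 14e3), (0.46, 11e3)):
    print(f"D = {duty:.0%}, V = {v / 1e3:.0f} kV: "
          f"P_avg ~ {avg_power(duty, v, 0.15):.0f} W")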

[Fig. 5 Pulsed duty cycle [%] versus the voltage supplied at the plasma jet nozzle [kV], for pulse-on times Ton = 16 μs and Ton = 20 μs. Fig. 6 Pulsed duty cycle [%] versus power consumption [W], for Ton = 16 μs and Ton = 20 μs. Fig. 7 Effect of plasma treatment time [s] on the drop contact angle [degrees], sessile drop mode. Fig. 8 Effect of plasma treatment time [s] on surface roughness [μm]: roughness peak Z (high), average roughness Ra, roughness valley Z (low).] Figures 7 and 8 illustrate the effect of the plasma ion bombardment treatment time on the contact angle and the surface roughness. For longer durations under plasma ion bombardment, the drop contact angle is reduced to less than 50°; that is, the modified surface has a more hydrophilic characteristic. However, for plasma treatment times between 4 seconds and 10 seconds there is no obvious improvement in the contact angles, except at 12 seconds or more, as shown in Figure 7. From another point of view, the average roughness, Ra, could be effectively changed by the plasma treatment time, as shown in Fig. 8. Plasma ion bombardment positively affects the PCB surface roughness. From this figure, we understand that the improved hydrophilicity could be related to the difference between the roughness peak Z (high) and the roughness valley Z (low), but not to Ra. 5. CONCLUSIONS

Experimental results have shown that the pulsed atmospheric pressure plasma delivery system can deliver a stable plasma stream under tunable DC power supply, pulsed parameters and air pressures. One of the optimal parameter sets includes an operating frequency of 25 kHz, a duty cycle of 46%, an air pressure of 0.4 kg/cm² and an electrical power of 0.77 kW, resulting in droplet contact angles decreasing from 65.1° to 34.1° after a 12-second plasma surface treatment. In addition, the average surface roughness (Ra) rose from 1.19 μm to 2.17 μm after surface modification for 10 seconds. This is clear proof of the usefulness of the PAPPS system in cleaning and activating PCB surfaces. After plasma surface treatment, the hydrophilic characteristic of the PCB boards is improved, which means that the adhesion on the PCB surface is improved for soldering or component assembly. 6. REFERENCES [1] Andreas Schutze, James Y. Jeong, Steven E. Babayan, Jaeyoung Park, Gary S. Selwyn, and Robert F. Hicks, The Atmospheric-Pressure Plasma Jet: A Review and Comparison to Other Plasma Sources, IEEE Transactions on Plasma Science, Vol. 26, No. 6, 1998. [2] N. St. J. Braithwaite, An Introduction to Gas Discharges, Plasma Sources Science and Technology, Vol. 9, 2000. [3] E. Panousis, F. Clement, J. F. Loiseau, N. Spyrou, B. Held, J. Larrieu, E. Lecoq and C. Guimon, Surface Treatment of Titanium Alloys by Atmospheric Plasma Jetting under Pulsed Nitrogen Discharge Conditions, Surface and Coatings Technology, Vol. 201, Issues 16-17, 2007. [4] Claire Tendero, Christelle Tixier, Pascal Tristant, Jean Desmaison and Philippe Leprince, Spectrochimica Acta, Part B, Vol. 61, 2006. [5] H. E. Wagner, R. Brandenburg, K. V. Kozlov, A. Sonnenfeld, P. Michel and J. F. Behnke, The Barrier Discharge: Basic Properties and Applications to Surface Treatment, Vacuum, Vol. 71, 2003. [6] Gunter Mark, Tutorial: PPST 2003, June 9-10, Tokyo, Japan. [7] Plasma.html [8] Edward V. Barnat and Toh-Ming Lu, Pulsed and Pulsed Bias Sputtering: Principles and Applications, Kluwer Academic Publishers.

Understanding the disadvantages of technological support to promote the use of IT among small Mexican companies Pilar ARROYO Department of Administration Sciences and Marketing, Tecnológico de Monterrey, Campus Toluca, Toluca, México and Victoria EROSA Graduate School of Business Administration, Autonomous University of Tamaulipas, UAM Campus Commerce and Administration Victoria, Ciudad Victoria, Tamps., México ABSTRACT This paper makes a diagnosis of the factors that determine the use of information and communication technologies (ICT) and information systems (IS) among small trade and service companies located in the center of a large Mexican city. The study uses a multiple case approach to rank companies in terms of their technology infrastructure and the scope of their e-commerce applications. Once the companies are ranked, their degree of IT usage is related to the following factors: IT innovation promoted by the CEO, IT knowledge sharing among employees, and technical support provided by consultants and system vendors. The case analysis shows that those companies that make full use of their technical infrastructure have CEOs/owners with better perceptions of the usefulness of Internet-based technologies, have established collaborative relationships with consultants, and have employees interested in using IT to facilitate and improve their tasks. Keywords: information and communication technologies, small businesses, knowledge sharing, technical support. 1. INTRODUCTION Research claims that IT has the potential to support the competitiveness of small and medium-sized enterprises (SMEs) [5], [8], [9], [17]. However, authors such as Nieto and Fernández [13] and Taylor and Murphy [18] present data for European and US companies indicating that SMEs are less committed to the digital economy than larger companies. Despite the advantages that Internet-based technologies offer for advancing electronic relationships with trading partners, improving information flows and customer service, and reducing marketing and order processing costs [9], the current literature shows that SMEs still have a limited strategic vision of e-commerce and e-business initiatives [5]. IT-related decisions are thus driven by operational issues, such as an immediate perceived benefit or pressure from vendors and customers [9].
It has been argued that one of the main reasons for the low diffusion of IT among SMEs is their relative disadvantage compared with larger companies due to their limited capital resources, which leads to poor investment in IT infrastructure [10]. However, other, even more important barriers have been identified. Among them are beliefs about the inadequacy of existing technologies to the company's needs; perceptions about insecurity and privacy (especially about online payments); lack of computer skills and inadequate technical support for end users; inadequate national policies to promote IT adoption among SMEs; and a poor vision of the potential contribution of IT to the competitiveness of the company. The research question of this work is therefore posed as: How important, with respect to the IT infrastructure, are the availability of computer skills, technical support and senior management support for the effective use of Information and Communication Technologies (ICT) among small Mexican retail and service companies? The organization of the work is as follows. The first section reviews the literature on the factors that affect the use of IT; then the methodology followed to identify and classify companies with different levels of IT use and infrastructure is presented. The third section presents the analysis of the cases and identifies those factors that were decisive in taking advantage of the available computing resources. The final section draws conclusions, provides directions for future research, and discusses the limitations of the study. 2. REVIEW OF THE LITERATURE The perceptions of usefulness and ease of use proposed by the TAM (Technology Acceptance Model) to explain the use of IT have been shown to be critical factors for the adoption of IT among SMEs [10], [12], [18]. But according to Ndubisi and Jantan [12], the use of Information Systems (IS) is driven not only by these perceptions, but also by the pre- and post-sale technical support provided by vendors and designers. The importance of acquiring external computer support and training has also been recognized by other authors, such as Lin [11] in the context of SMEs in Taiwan and Thong [19] in the case of small companies in Singapore. Caldeira and Ward [1] studied the influence of various factors in the internal context of the small business on the degree of IS/IT success (measured as management's use of and satisfaction with current IS/IT). They conclude that the technical competencies of employees, the knowledge available and the attitudes of top management towards IS/IT are more relevant to the use of IS/IT than financial resources. Those SMEs more concerned with developing internal technical competencies were more successful in adopting and obtaining benefits from IS/IT

than companies that focused on the quality of the software or systems purchased.

The development of technical competencies at the organizational level requires individual skills and the exchange and application of knowledge to solve or improve work tasks. SMEs are generally unable to hire IS/IT professionals; however, technical skills can be developed internally through more informal training mechanisms such as mentoring and interpersonal exchanges of experience. The acquisition of external knowledge, together with the diffusion and application of such knowledge to gain competitive advantage (identified as absorptive capacity), is less recognized among SMEs [16]. Small businesses often do not have a systematic approach to developing, sharing, or exploiting knowledge. According to Desouza and Awazu [4], in SMEs the transfer of knowledge from the CEO to employees and between employees occurs mainly through socialization. This means that knowledge is shared and immediately put to use through informal mechanisms such as face-to-face interactions, shadowing, mentoring, and job rotation to gain experience. To facilitate knowledge sharing, it is relevant to motivate and stimulate employees to learn from each other and to communicate clearly the reasons for and value of knowledge/experience exchanges. The provision of managerial direction and of infrastructure that facilitates knowledge sharing among employees and learning from customers and suppliers influences the use of IS/IT among SMEs [2].

Since SME top managers are often directly involved in most strategic and tactical decisions, they have a strong influence on IS/IT adoption [1], [7], [14]. If senior managers are convinced of the benefits and potential of IS/IT, they promote its use and encourage employees to propose new applications for current systems. Conversely, when top management is not committed to IS/IT or has unrealistic expectations of what the technology can do, diffusion is limited and restricted to specific applications suggested or required by trading partners (for example, online payments imposed by customers).

Since the theoretical models used to identify the internal factors that promote the adoption and use of IT among SMEs have been validated mainly in developed countries, it is important to obtain additional information on the factors that influence the use of IT among Mexican SMEs, in order to make suggestions on how to get the most out of their current IT infrastructure. This work first diagnoses current IS/ICT applications among local SMEs and then identifies the effect of CEO support, technical support, and knowledge sharing on the exploitation of the available technologies.

3. METHODOLOGY

The study was conducted among micro and small businesses located in the downtown area of one of the largest and most dynamic cities in central Mexico. The interest in these companies in particular is due to the following two reasons: a) Small companies, particularly those operating in the service and retail sectors, constitute the main economic base in the center of large Mexican cities. These companies are usually well-established family businesses that have been operating for several years in the same geographic area and provide employment for a significant percentage of the local population. b) Small businesses are subject to strong survival pressure due to the location of new shopping centers on the outskirts of large Mexican cities.
Large department stores (anchor stores), retail stores, and service chains in these malls have taken over the market previously served by the small downtown businesses. These large retailers and service companies not only compete by offering a wide variety of merchandise at competitive prices, but also implement e-commerce activities to improve customer service and business efficiency.

The case study was the selected research methodology because this qualitative technique allows a small number of companies to be studied in depth from the point of view of the people who work directly with IS/IT [20]. The method is an open, flexible, and convenient approach to generating useful information to explain why small businesses are or are not making the most of ICT. An online review of the yellow pages made it possible to identify the main commercial activities predominant in the city center. This information was used to define the types of small businesses to include in the study: haberdasheries, office supply and stationery stores, hardware stores, clothing and shoe stores, and beauty salons. Two companies in each of the identified commercial and service activities were randomly selected from the yellow pages list. The selected firms were visited in person, the research objectives were presented to the general manager or owner of each firm, and a formal request for participation was submitted. All companies contacted agreed to provide the required information, and an interview appointment was made. The number of business units that make up each firm varied between 2 and 10 subsidiaries. The number of employees ranged from 2 to 10 in each business unit, resulting in a total number of employees not exceeding 50. All participating companies are therefore classified as micro or small companies according to the official Mexican classification.

The data collection strategy employed a two-phase approach. The initial phase involved a) in-depth personal interviews with CEOs/owners and b) follow-up questions to technology users to determine how IS and IT are used within the company. The second phase involved additional interviews with the CEO and employees to gather information on the characteristics of the technical services used by the company, the CEO/owner's perceptions of the usefulness of IS/IT, the managerial actions implemented to encourage the use of IS/IT among employees, and the mechanisms used to develop internal technical skills. The mechanisms relevant to this work included training provided by external consultants hired by the company and employee participation in a knowledge network related to the use of IS/IT. All interviews were audio-recorded, notes were taken, and transcripts were prepared. These were the inputs for the classification of companies in terms of available IS/IT infrastructure and use.

The use of IT is defined as the application of IT within the company to support business activities [6]. The use of IT in support of all business activities is considered an evolutionary process [18] that starts with efficient internal and external communication through e-mail, continues with the use of websites and e-commerce activities (order placement and online payments), progresses with the integration of e-business activities, and culminates with a transformed organization that uses IS/IT as the basis for networking with business partners.
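This staged view lends itself to a simple classification rule. The Python sketch below assigns a firm to the highest ladder stage for which it uses at least one marker technology; the stage names, the marker sets, and the function itself are illustrative assumptions made here, not an instrument taken from the study.

    # Illustrative reading of the e-commerce adoption ladder sketched above.
    # Stage names and marker technologies are assumptions for illustration.
    LADDER = [
        ("e-mail",         {"email"}),
        ("web presence",   {"website"}),
        ("e-commerce",     {"online_orders", "online_payments"}),
        ("e-business",     {"integrated_processes"}),
        ("transformation", {"partner_network"}),
    ]

    def ladder_stage(technologies: set[str]) -> str:
        """Return the highest ladder stage the firm's technologies reach."""
        stage = "no adoption"
        for name, markers in LADDER:
            if technologies & markers:  # firm uses at least one marker of the stage
                stage = name
        return stage

    # A firm with e-mail, a website, and online payments sits at "e-commerce".
    print(ladder_stage({"email", "website", "online_payments"}))

Under this reading, a firm with only e-mail and a static website would sit at the "web presence" stage, which matches the intuition that e-commerce requires transactional, not merely informational, use of the web.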

Following this evolutionary e-commerce process, the interview guide included open-ended questions organized into the following sections: 1) general information about the company; 2) technologies used for communication and information exchange (telephone, fax, Internet, and web pages); 3) technologies used to maintain and manage customer relationships (customer databases and web pages); and 4) technologies used to increase process efficiency and control business operations (for example, online banking, accounting systems, inventory management systems, order processing, financial planning, and payroll systems). Respondents were asked to describe how the available IS/IT are implemented, who the users are, and what specific tasks are supported by these technologies.

4. DISCUSSION OF RESULTS

The number of computers in the selected business units varied from one to nine. Four of the SMEs did not have an Internet connection; all participants have implemented an accounting or financial control system, and all but one have tried to put an automatic inventory system into operation. Five companies have a website, but only two take advantage of existing ICTs for online shopping and e-commerce. Four academic experts (both authors and two other professors) and two IT professionals ranked the participating companies in terms of their progress in the IS/IT utilization process. The ranking was based on three themes: the use of ICT to maintain communication with business partners and customers, the number of information systems in use, and the variety of business processes facilitated by IT/IS. The ordinal data were used as input to a non-metric multidimensional scaling (MDS) procedure (a group of techniques for the spatial representation of data). The ALSCAL procedure in SPSS resulted in a two-dimensional solution with a stress of 1.4% and R2 = . The spatial plot of the MDS analysis provided a graphical representation of the position of the participating companies with respect to two critical dimensions (the axes of the spatial plot): 1) IS/ICT infrastructure (vertical) and 2) degree of e-commerce applications (horizontal). The position of a company with respect to the others reflects its relative advantage in the use of IS/IT, but the distances between companies are only qualitative indicators of the differences between them.
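To make the mapping step concrete, the following Python sketch reproduces the flavor of this analysis with non-metric MDS. The paper used the ALSCAL procedure in SPSS; here scikit-learn's MDS class stands in for it, and the random rank matrix and the Euclidean aggregation of the three ranking themes are placeholder assumptions, not the study's actual data.

    import numpy as np
    from sklearn.manifold import MDS

    rng = np.random.default_rng(0)

    # Placeholder ordinal data: ranks (1 = least advanced) assigned to the
    # 14 companies on the three ranking themes described above.
    ranks = rng.integers(1, 15, size=(14, 3)).astype(float)

    # One simple (assumed) way to turn rank profiles into dissimilarities:
    # Euclidean distance between the rank vectors of each pair of companies.
    diff = ranks[:, None, :] - ranks[None, :, :]
    dissim = np.sqrt((diff ** 2).sum(axis=2))

    # Non-metric MDS: only the rank order of the dissimilarities is fitted,
    # which matches the ordinal nature of the expert judgments.
    mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
              random_state=0)
    coords = mds.fit_transform(dissim)   # one (x, y) point per company

    # Axis signs in MDS are arbitrary: the axes are interpreted afterwards
    # (vertical = IS/ICT infrastructure, horizontal = e-commerce scope),
    # and quadrant membership is read off the oriented map.
    print(coords.round(2))
    print(f"stress = {mds.stress_:.4f}")

A low stress value indicates that the two-dimensional map preserves the rank order of the dissimilarities well, which is what licenses reading the four quadrants discussed next.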
The following paragraphs offer a detailed description of the situation of the companies classified in each quadrant.

1. Companies at a disadvantage. The companies in the southwest quadrant have a low ICT infrastructure, limited to telephone and/or fax. Some of these firms do not have an Internet connection (Mercería América, Solo Ajuste, and Novensa). Four of the 14 participating companies are located in this quadrant. Some of the companies in this quadrant introduced information systems, a financial management system, and an inventory system, but these were not actually used because the CEO and/or the users perceived them as inadequate or too complicated (Solo Ajuste, a clothing store). Spreadsheets are not even used to organize financial reports, and word-processing systems are used only to prepare letters and memos, and occasionally to print promotional material (Refaccionaria Jaimes). The company located furthest west, Mercería América, has a single MS-DOS computer that runs a very rudimentary and outdated inventory and accounting system that has not been updated since its implementation ten years ago. The staff at this firm do not even conduct routine online transactions such as tax payments or cash transfers.

2. Limited e-commerce. Firms in the northwest quadrant have a better ICT infrastructure than the firms on the southern side of the map, but they underutilize it. Web pages are used mainly to post information; online payment (e-commerce) applications, although implemented, are hardly used (Mercería San Jorge). At the Stilisimo beauty salon, clients use some of the available computers to surf the Internet while waiting for service. The inventory system for placing purchase orders (e-purchasing) and controlling inventory is operated by a single employee (Stilisimo), and the information is updated only when there is a shortage. Email is used only occasionally, even for communication between business units (Refaccionaria Orsen).

3. Deployment of technical infrastructure. The southeast quadrant contains companies that have adopted a small number of IS/IT (only a few computers and basic information systems) but use them to the maximum to facilitate tasks and control activities. Zapatería Escorpio uses the available inventory system to keep track of purchase orders, and each salesperson uses it to verify stock; the accounting and financial system is used to track sales by product, control cash flows, and identify the best-selling product lines. The Capa de Ozono Intranet facilitates the replenishment and processing of orders for the different business units while reducing delivery times. To introduce additional e-commerce applications, the companies in this quadrant need to make additional investments in technology infrastructure and information systems, and to extend their core business model to include customers who shop online. Revelación en la Moda, a clothing store, has created a customer database and segmented its customers (by age and purchase frequency); emails are sent to frequent customers to keep them informed.

4. Competitiveness. The companies in the northeast quadrant have the best IS/IT infrastructure and applications. Only two companies are clearly located in this quadrant (14% of the participants), and another lies close to zero on the horizontal axis. One of them, the RIME paper consortium, uses the Internet to communicate with major customers and inform them about new products and the status of their orders. This stationery consortium is the only participant that has fully implemented an online sales system for end customers (e-commerce) to support market expansion. Its Intranet links the stores of the consortium with a distribution center so that each store can place orders directly, avoiding stockouts and reducing replenishment times. The beauty salon Leny Jose uses systems to schedule appointments, to control cash flows through an accounting and financial system, and to manage inventory and keep a record of courtesy-service exchanges using spreadsheets.

The arrangement of the companies on the map reveals high variability among small companies regarding the use of IS/IT. Most of the small retail and service companies require additional investments in ICT and e-commerce applications. Even the most disadvantaged companies have implemented accounting/financial and inventory systems, but these are not fully used (Novensa and Solo Ajuste), or they need

to be updated or restructured according to the real needs of the business (Mercería América).

When the reasons for the poor IS/ICT infrastructure and use were explored, the perceived complexity of the technology (related to poor technical skills), inadequate technical support, and a lack of awareness of the benefits of automating business operations were mentioned as the main problems. The cost of technology was mentioned only by the least advanced and smallest companies (Solo Ajuste and ABC). The factors that explain the differences between the quadrants were organized into three categories: CEO/owner promotion and support; knowledge exchanges that favor the use of IT/IS; and the availability of adequate technical support. A detailed discussion follows.

1. Disadvantaged. The CEOs/owners of these companies do not see a need for up-to-date systems or hardware. They consider that technical consultants and software vendors are too expensive; they have therefore implemented non-customized systems or no systems at all, and the lack of support inhibits the use of IT/IS among employees.