ICEIS 2008 Abstracts

Conference Areas
- Databases and Information Systems Integration
- Artificial Intelligence and Decision Support Systems
- Information Systems Analysis and Specification
- Software Agents and Internet Computing
- Human-Computer Interaction

Special Sessions
- Computational Intelligence using Affinity Set
- Computer Supported Activity Coordination

ICEIS Doctoral Consortium

Workshops
- Pattern Recognition in Information Systems
- Modelling, Simulation, Verification and Validation of Enterprise Information Systems
- Security in Information Systems
- Natural Language Processing and Cognitive Science
- Ubiquitous Computing
- Model-Driven Enterprise Information Systems
- Technologies for Context-Aware Business Process Management
- RFID Technology - Concepts, Applications, Challenges
- Human Resource Information Systems

Area 1 - Databases and Information Systems Integration

Title: DATA INTEGRATION IN BIOLOGICAL MODELS
Author(s): P. Mariño, F. P. Fontán, M. A. Domínguez and S. Otero
Abstract: Biological research in agriculture needs many specialized electronic sensors in order to fulfill different goals, such as climate monitoring, soil and fruit assessment, control of insects and diseases, monitoring of chemical pollutants, identification and control of weeds, and crop tracking. This research must be supported by consistent biological models able to simulate diverse environmental conditions, in order to predict the right human actions before risky biological damage becomes irreversible. This paper describes an experimental distributed network based on climatic and biological wireless sensors that provides real measurements for validating different biological models used in viticulture applications. First, the experimental network for automatic field data acquisition is introduced as a system based on distributed processing. Next, the design of the wireless network is explained in detail, preceded by a discussion of the state of the art, and some measurements for viticulture research are pointed out. Finally, future developments are stated.

Title: CONCEPTUAL FRAMEWORK FOR XML SCHEMA MAPPING
Author(s): Myriam Lamolle, Amar Zerdazi and Ludovic Menet
Abstract: Today's web-based applications and web services publish their data using XML, as this helps interoperability with other applications and services. The heterogeneity of XML data has led to recent research in schema matching, schema transformation, and schema integration for XML. In this paper, we propose an approach to mapping integration for XML schemas. The basic idea is to derive direct as well as complex matches, with their associated transformation operations, from the computed element similarities.
The representation of a mapping element in a source-to-target mapping clearly declares both the semantic correspondences and the access paths used to access and load data from a source into a target schema. We detail our mapping generation process, which proceeds in four steps to specify a formal representation of a source-to-target mapping in order to discover structural mappings between XML schemas.

Title: DEVELOPING THE SKILLS NEEDED FOR REQUIREMENT ELICITATION IN GLOBAL SOFTWARE DEVELOPMENT
Author(s): Miguel Romero, Aurora Vizcaíno and Mario Piattini
Abstract: The requirements elicitation stage is the most critical one in the development of a software product. However, this stage is not taught to the required depth, nor is much time invested in training students and practitioners in these tasks. There is currently a trend towards global software development (GSD), which complicates the requirements elicitation process since, for instance, communication is more difficult when stakeholders are geographically distributed. Moreover, elicitation in GSD involves a variety of characteristics that are not often taught in software engineering courses. This paper presents some of the most important factors which may affect elicitation in GSD. Furthermore, we propose techniques with which to help students and software engineers develop some of the skills needed to carry out the elicitation process in GSD.

Title: EMBEDDING XPATH QUERIES INTO SPARQL QUERIES
Author(s): Matthias Droop, Markus Flarer, Jinghua Groppe, Sven Groppe, Volker Linnemann, Jakob Pinggera, Florian Santner, Michael Schier, Felix Schöpf, Hannes Staffler and Stefan Zugal
Abstract: While XPath is an established query language developed by the W3C for XML, SPARQL is a new query language developed by the W3C for RDF data. Comparisons between the data models of XML and RDF, and between the query languages XPath and SPARQL, have been missing.
XML and XPath are older W3C recommendations than RDF and SPARQL; thus, more XML data and XPath queries are currently used in applications, yet available SPARQL query evaluators do not deal with XML data and XPath queries. We have developed a prototype for translating XML data into RDF data and embedding XPath queries into SPARQL queries, for two reasons: 1) we want to compare the XPath and XQuery data model with the RDF data model, and the XPath query language with the SPARQL query language, in order to show similarities and differences; 2) we want to enable SPARQL query evaluators to handle XML data and XPath queries, in order to support XPath processing and SPARQL processing in parallel. The prototype performs source-to-source translations from XML data into RDF data and from XPath queries into SPARQL queries. We have run experiments to measure the execution times of the translations, and of XPath queries and their translated SPARQL queries.

Title: IMPLEMENTATION OF ALGEBRA FOR QUERYING WEB DATA SOURCES
Author(s): Iztok Savnik
Abstract: The paper presents the implementation of the query execution system Qios, which serves as a lightweight system for the manipulation of XML data. Qios employs relational technology for query processing. The main aim of the implementation is to provide a querying system that is easy to use and does not require any additional knowledge about the internal representation of data. The system provides robust and simple solutions for many design problems. We aimed to simplify the internal structures of query processors, rooted in the design of relational and object-relational query processors. We propose efficient internal data structures for the representation of queries during all phases of query execution. Query optimization is based on dynamic programming and uses beam search to reduce the time complexity.
The data structure for storing queries provides an efficient representation of queries during the optimization process and a simple means to exploit plan caching. Finally, main-memory indices can be created on the fly to support the evaluation of queries.

Title: A PROPOSAL OF SOFTWARE ARCHITECTURE FOR MULTIPLATFORM ENVIRONMENT APPLICATIONS DEVELOPMENT - A QUANTITATIVE STUDY
Author(s): André Luiz de Oliveira, André Luis Andrade Menolli and Ricardo Gonçalves Coelho
Abstract: Due to the problems caused by the continual increase in the complexity and size of software systems, the adoption of software patterns and principles has become necessary to deal with those problems. For this reason, software architecture has emerged as a new discipline in the Software Engineering field and is already being applied widely in several areas. However, there is a shortage of proposals for architectures addressing multiplatform systems development. This work proposes a software architecture for the development of such systems. The design of the architecture model is based on the Data Access Object, Facade and Singleton patterns. The validation process for the architecture model used the three-layer software architecture model as an evaluation baseline: a quantitative assessment was carried out on two implementations of an application, one using the three-layer architecture model and the other using the proposed model. The study used key software engineering attributes, such as separation of concerns, coupling, cohesion and size, as evaluation criteria. As a result, it was verified that adopting the architecture model presented in this work provides a better separation of the concerns present in the application components than the implementation using the three-layer architecture model.
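The combination of patterns named in the abstract above can be sketched as follows; the class names and the in-memory store are illustrative assumptions, not the authors' actual architecture:

```python
# Sketch of the DAO / Facade / Singleton combination: hypothetical names,
# with a dict standing in for a real database connection.

class ConnectionManager:
    """Singleton: one shared data store per process."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.store = {}  # stands in for a real connection
        return cls._instance

class UserDAO:
    """Data Access Object: hides how user records are stored."""
    def __init__(self):
        self._conn = ConnectionManager()

    def save(self, user_id, name):
        self._conn.store[user_id] = name

    def find(self, user_id):
        return self._conn.store.get(user_id)

class AppFacade:
    """Facade: the single entry point each platform-specific UI calls."""
    def __init__(self):
        self._users = UserDAO()

    def register_user(self, user_id, name):
        self._users.save(user_id, name)
        return self._users.find(user_id)
```

Because every platform-specific front end talks only to the facade, the storage details behind the DAO can change without touching UI code, which is the separation-of-concerns property the study measures.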
Title: NETWORK EXTERNALITIES FOR ENTERPRISE RESOURCE PLANNING SOFTWARE - A RESEARCH ROADMAP
Author(s): Georg Peters, Florian Thoma and Richard Weber
Abstract: Presently, the ERP software market is characterized by fierce competition among some of the largest software producers in the world. While SAP tries to defend its position as the market leader, Oracle, founded as a database vendor, has invested heavily in its ERP business by taking over large related software producers like PeopleSoft or Siebel. Microsoft has also bought ERP vendors as a basis for its MBS (Microsoft Business Solutions) division. IBM and Sage are further players in the multi-billion market for ERP software. Currently the market faces two big challenges. (1) The classic ERP market addressing large corporations has become mature and is today characterized by growth through displacement; however, the ERP market for SMEs still promises high growth rates. (2) The classic monolithic ERP systems are presently being replaced by new middleware-based technologies that provide open platforms to easily integrate business applications from third-party software producers. Network externalities will significantly influence which ERP vendor(s) will dominate the future market. Therefore, we qualitatively analyse the current strengths and weaknesses of the presently most promising ERP producers with respect to network externalities.

Title: RESOURCE AGGREGATION IN DIGITAL LIBRARIES - STATIC VS DYNAMIC PROTOCOLS
Author(s): Pedro Almeida, Marco Fernandes, Joaquim Arnaldo Martins and Joaquim Sousa Pinto
Abstract: This article analyses the development of static and dynamic protocols for aggregating metadata and resources from heterogeneous systems. In particular, it compares the advantages and drawbacks of both types of protocols and presents a case study of the University of Aveiro Information System as an example of the possibilities of dynamic resource aggregation systems.
Title: A LOCKING PROTOCOL FOR DOM API ON XML DOCUMENTS
Author(s): Yin-Fu Huang and Mu-Liang Guo
Abstract: In this paper, we develop a new DOM API locking protocol (DLP) that adopts the DOM structure for locking. In order to enhance concurrency and system performance, we study operation conflicts in greater detail. The proposed DLP supports more update operations than other protocols do, without implying higher locking costs. Finally, we conducted several experiments to compare DLP with other protocols and to observe its performance under different workload parameters.

Title: SQS - A SECURE XML QUERYING SYSTEM
Author(s): Wei Li and Cindy Chen
Abstract: Contemporary XML database querying systems have to deal with a rapidly growing amount of data and a large number of users. As a consequence, if access control is used to protect sensitive XML data at a fine-grained level, query evaluation becomes inefficient, since it is difficult to enforce access control on each node in an XML document when the user's view needs to be computed. We design and develop a secure XML querying system, namely SQS, where caching is used to store query results and security information. Depending on whether there is a cache hit, user queries are rewritten into secure system queries that are executed either on the cached query results or on the original XML document. We propose a new cache replacement policy, LSL, which updates the cache based on the security level of each entry. We also demonstrate the performance of the system.

Title: AN IMPLEMENTATION OF XML DATA INTEGRATION
Author(s): Weidong Pan, Jixue Liu and Jiashen Tian
Abstract: Data integration is essential for building modern enterprise information systems. This paper investigates the implementation of XML data integration through transforming XML data from different data sources into a common global schema.
Following the earlier work of our research group, this paper focuses on the implementation of the XML data transformation operations. First, the proposed methodology for realizing XML data integration is sketched. Then, the representation of an XML DTD and document, together with other concepts related to transforming XML data, is presented. The transformation operations, defined by a set of operators, are outlined with a focus on the functionality required for data integration. Building upon these, the implementation of the data transformation operations is investigated. The current implementation is reported, with a simplified example illustrating how the methodology can be used in practical enterprise integration applications.

Title: SOFTWARE MEASUREMENT BY USING QVT TRANSFORMATIONS IN AN MDA CONTEXT
Author(s): Beatriz Mora, Félix García, Francisco Ruiz, Mario Piattini, Artur Boronat, Abel Gómez, José Á. Carsí and Isidro Ramos
Abstract: At present, the objective of obtaining quality software products has led to the need for good software process management, in which measurement is a fundamental factor. Due to the great diversity of entities involved in software measurement, a consistent framework is necessary to integrate the different entities into the measurement process. In this work, a Software Measurement Framework (SMF) is presented for measuring any type of software entity. In this framework, any software entity in any domain can be measured with a common Software Measurement metamodel and QVT transformations. This work explains the three fundamental elements of the Software Measurement Framework (conceptual architecture, technological aspects and method). All these elements have been adapted to the MDE paradigm and to MDA technology, taking advantage of their benefits within the field of software measurement. An example illustrating the framework's application to a concrete domain is also shown.
Title: ODDI - A FRAMEWORK FOR SEMI-AUTOMATIC DATA INTEGRATION
Author(s): Paolo Ceravolo, Ernesto Damiani, Marcello Leida, Zhan Cui and Alex Gusmini
Abstract: Data integration systems are used to integrate heterogeneous data sources into a single view. Recent work on Business Intelligence highlights the need for timely, trustworthy and sound data access systems. This calls for methods based on a semi-automatic procedure that can provide reliable results. A crucial factor for any semi-automatic algorithm is the set of matching operators it implements. Different categories of matching operators carry different semantics; for this reason, combining them in a single algorithm is a non-trivial process that has to take a variety of options into account. This paper proposes a solution based on a categorization of matching operators that allows similar attributes to be grouped in a semantically rich form. In this way, we define all the information needed to create a mapping. Mapping generation is then activated only on those sets of elements that can be queried without violating any integrity constraints on the data.

Title: DYNAMIC SEMI-MARKOVIAN WORKLOAD MODELING
Author(s): Nima Sharifimehr and Samira Sadaoui
Abstract: In this paper, we introduce a novel Markovian model which is an intuitive combination of semi-Markov and dynamic Markov models. The proposed model is designed to be an efficient statistical modeling tool that captures both action-interval patterns and sequential behavioral patterns. We show the applicability of our approach to modeling the workload of enterprise application servers. Furthermore, a formal definition of the model and detailed algorithms for its implementation are presented, preparing firm ground for academic researchers to investigate other possible applications. We also present evaluation results to demonstrate the accuracy of our novel dynamic semi-Markov model.
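As a rough illustration of the two kinds of pattern such a semi-Markov workload model captures, the following minimal sketch estimates transition probabilities (sequential behaviour) and mean holding times (action intervals) from an event trace; the trace and function names are hypothetical and do not reproduce the authors' algorithm:

```python
from collections import defaultdict

def fit_semi_markov(events):
    """Estimate transition probabilities and mean holding times from a
    sequence of (action, timestamp) pairs."""
    counts = defaultdict(lambda: defaultdict(int))
    holds = defaultdict(list)
    for (a, t), (b, u) in zip(events, events[1:]):
        counts[a][b] += 1            # sequential behaviour
        holds[a].append(u - t)       # time spent in state a
    probs = {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
             for a, nxt in counts.items()}
    mean_hold = {a: sum(ts) / len(ts) for a, ts in holds.items()}
    return probs, mean_hold

# Example: a tiny (made-up) application-server trace
trace = [("login", 0.0), ("search", 1.0), ("view", 3.0),
         ("search", 4.0), ("view", 6.0), ("logout", 7.0)]
probs, mean_hold = fit_semi_markov(trace)
```

A plain Markov chain would keep only `probs`; the holding-time statistics are what the semi-Markov extension adds, and a dynamic variant would update both incrementally as new events arrive.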
Title: MODEL-DRIVEN GENERATION AND OPTIMIZATION OF COMPLEX INTEGRATION PROCESSES
Author(s): Matthias Boehm, Uwe Wloka, Dirk Habich and Wolfgang Lehner
Abstract: The integration of heterogeneous systems is still one of the main challenges in the area of data management. Due to the trend towards heterogeneous system environments, service integration is gaining in importance. This has motivated several extensions to database technology in order to meet the interoperability requirements. Further, the different levels of integration approaches result in a large number of different integration systems. Given these proprietary solutions and the lack of a standard for data-intensive integration processes, model-driven development, following the paradigm of Model-Driven Architecture (MDA), is advantageous. This paper contributes to the model-driven development of complex and data-intensive integration processes. In addition, optimization possibilities offered by this model-driven approach are illustrated, and first evaluation results correlated with our optimization strategies are discussed.

Title: MESSAGE INDEXING FOR DOCUMENT-ORIENTED INTEGRATION PROCESSES
Author(s): Matthias Boehm, Uwe Wloka, Dirk Habich and Wolfgang Lehner
Abstract: The integration of heterogeneous systems is still an evolving research area, and the complexity of integration processes poses challenges for their optimization. Message-based integration systems, like EAI servers and workflow process engines, are mostly document-oriented, using XML technologies in order to achieve suitable data independence from the different proprietary data representations of the supported external systems. However, such an approach incurs large costs for single-value evaluations within the integration processes. Here, message indexing, adopting extended database technologies, can be applied in order to reach better performance.
In this paper, we introduce our message indexing structure MIX, and discuss and evaluate immediate as well as deferred indexing concepts. Further, we describe preliminary adaptive index tuning techniques.

Title: A MEDICAL INFORMATION SYSTEM BASED ON ORACLE TECHNOLOGIES
Author(s): Liana Stanescu, Dumitru Dan Burdescu, Cosmin Stoica Spahiu, Anca Ion and Cristian Guran
Abstract: The paper presents an information system based on Oracle technologies (Oracle Database, Oracle interMedia and Oracle JDeveloper) dedicated to managing and querying medical multimedia databases. The database contains images related to the internal medicine area. This on-line application allows the creation of complex medical files for patients that can be viewed and updated by both the internist and the general practitioner. The main functions of the application are: managing patients' contact information, examinations and imagery, and content-based visual query using color, texture and shape characteristics. It can be used in individual offices, laboratories, or in hospital clinics and departments. The application provides security and confidentiality for patients' data.

Title: A GENERATIVE APPROACH TO IMPROVE THE ABSTRACTION LEVEL TO BUILD APPLICATIONS BASED ON THE NOTIFICATION OF CHANGES IN DATABASES
Author(s): J. R. Coz, R. Heradio Gil, J. A. Cerrada Somolinos and J. C. López Ruiz
Abstract: This paper highlights the benefits, in terms of quality, productivity and time-to-market, of applying a generative approach to raise the abstraction level at which applications based on the notification of changes in databases are built. Most database management systems maintain meta-tables with information about all stored tables; this information is used in an automatic process to define the software product line (SPL) variability. The remaining variability can be specified by means of domain-specific languages.
Code generators can automatically query the meta-tables, analyze the input specifications and configure the current product. The paper also introduces the Exemplar Driven Development process, used to incrementally develop code generators, and the Exemplar Flexibilization Language that supports the process implementation.

Title: ERP TRENDS - MOBILE APPLICATIONS AND PORTAL
Author(s): Octavian Dospinescu, Doina Fotache, Bogdanel Adrian Munteanu and Luminiţa Hurbean
Abstract: Given business globalization, complete integration is a major goal of information resource management. Applications and data are combined into integrated entities providing not only access to information, but also internal and external economic process management. The first case refers to internal organizational processes and is achieved with ERP packages. External integration concerns connecting customers and the supply chain to the organizational environment for the performance of economic processes, and it cannot be achieved without internal information coherence (an ERP system). In the last decade, ERP has continued to expand, blurring the boundaries of the core system. The number of modules and the extended functionality offered in ERP suites have progressively grown, making integration a greater challenge for the enterprise. Simultaneously, the market trends (consolidation, verticalization and specialization) have a broader effect than just on ERP. Wireless applications also provide new opportunities for organizations, enabling access to relevant information from anywhere and at any time. In order to take advantage of the features of a ubiquitous environment, ERP systems have to support the mobile behaviour of their users.
The present paper is an exploratory analysis of the current state of mobile applications and services for companies and of the achievements in the field, and proposes an architecture model for mobile services, starting from the functionalities identified as necessary for a portal of mobile services. Besides the general architecture of a portal of mobile applications for companies, a set of minimal functionalities for implementation is proposed in order to ensure the promotion and use of the services.

Title: INTEGRATING LABOR SKILLS CERTIFICATION WITH TRADITIONAL TRAINING FOR ELECTRIC POWER OPERATORS
Author(s): R. Molina, I. Paredes, M. Domínguez, L. Argotte and N. Jácome
Abstract: In this paper, the integration of a traditional training system with a competence management model is conceptually described. The resulting e-system is accessed through powerful Web interfaces and contains a comprehensive database that maintains information from the two models. The traditional model emphasizes workers' contractual training rights, while the skills model emphasizes the alignment of human talent with the mission and objectives of the company. The paper describes the specific traditional training model of CFE (Federal Commission of Electricity), its competences model, and the integration of the two models following a thematic contents approach.

Title: AN INTEGRATED MODEL FOR MANAGERIAL AND PRODUCTIVE ACTIVITIES IN SOFTWARE DEVELOPMENT
Author(s): Daniel Antonio Callegari, Maurício Covolan Rosito, Marcelo Blois Ribeiro and Ricardo Melo Bastos
Abstract: Software organizations are constantly looking for better solutions when designing and using well-defined software processes for the development of their products and services. However, many software development processes lack support for project management issues.
This work proposes a model that integrates concepts from the PMBOK with those available in RUP, not only helping process integration but also assisting managers in decision making during project planning. We present the model and the results from a qualitative exploratory evaluation of a tool that implements the proposed model, conducted with project managers from nine companies.

Title: EMPIRICAL STUDY OF ERP SYSTEMS IMPLEMENTATION COSTS IN SWISS SMES
Author(s): Catherine Equey, Rob J. Kusters, Sacha Varone and Nicolas Montandon
Abstract: Building on the sparse literature investigating the cost of ERP systems implementation, our research uses data from a survey of Swiss SMEs that have implemented ERP in order to test cost drivers. The main innovation is the proposal of a new classification of cost drivers that depend on the enterprise itself, rather than on the ERP. Particular attention is given to consulting fees as a major factor in implementation cost, and a new major cost driver has come to light: "consultant experience", not previously mentioned as such in the literature, appears to be an important aspect of ERP implementation cost. The ERP implementation project manager must pay particular attention to this factor.

Title: DISCOVERING VEILED UNSATISFIABLE XPATH QUERIES
Author(s): Jinghua Groppe and Volker Linnemann
Abstract: The satisfiability problem of queries is an important determinant in query optimization. The application of a satisfiability test can avoid the submission and unnecessary evaluation of unsatisfiable queries, and thus save processing time and query costs. If an XPath query does not conform to the constraints in a given schema, or the constraints of an XPath query itself are inconsistent with each other, the evaluation of the query will return an empty result for any valid XML document, and thus the query is unsatisfiable.
Therefore, we propose a schema-based approach to filtering out XPath queries that do not conform to the constraints in the schema, as well as XPath queries with conflicting constraints. We present a complexity analysis of our approach, which proves that it is efficient in typical cases, and an experimental analysis of our prototype, which shows the optimization potential of avoiding the evaluation of unsatisfiable queries.

Title: A SOFTWARE ARCHITECTURE FOR KNOWLEDGE DISCOVERY IN DATABASE
Author(s): Maria Madalena Dias, Lúcio Gerônimo Valentim and José Rafael Carvalho
Abstract: Currently, most companies have computational systems that support their operational routines. Often, those systems are not integrated, thus generating duplicated and inconsistent information. This situation makes the search for the necessary, trustworthy information for decision making difficult. Data warehousing and data mining technologies have appeared to solve this type of problem. Many existing solutions do not cover both technologies; some of them are directed at the construction of a data warehouse and others at the application of data mining techniques. There are OLAP tools that do not cover the data preparation activities. This paper presents a reference architecture and a software architecture that define the components necessary for implementing knowledge discovery systems over databases, including data warehouse and data mining activities. Standards of best practice in design and software development were used from definition through implementation.

Title: IMPROVING SOFTWARE TEST STRATEGY WITH A METHOD TO SPECIFY TEST CASES (MSTC)
Author(s): Edumilis Méndez, María Pérez and Luis E. Mendoza
Abstract: An interesting difference between testing and other disciplines of the software development process is that it constitutes a task that essentially identifies and evidences the weaknesses of the software product.
Four relevant elements are considered when defining tests, namely reliability, cost, time and quality. Time and cost will increase to the extent that reliable tests and quality software are desired; but what does it take to make actors understand that tests should be seen as a safety net? If quality is not there before starting the tests, it will not be there upon their completion. Accordingly, how can we lay out a trace between tests and the functional and non-functional requirements of the software system? This article proposes a method for specifying test cases based on use cases, incorporating elements to verify and validate traceability among requirements management, analysis & design, and testing. This initiative originated as a response to a request from a software development company in the Venezuelan public sector.

Title: DISTRIBUTED SYSTEM FOR DISCOVERING SIMILAR DOCUMENTS - FROM A RELATIONAL DATABASE TO THE CUSTOM-DEVELOPED PARALLEL SOLUTION
Author(s): Jan Kasprzak, Michal Brandejs, Miroslav Křipač and Pavel Šmerk
Abstract: One of the drawbacks of e-learning methods such as Web-based submission and evaluation of students' papers and essays is that it has become easier for students to plagiarize the work of other people. In this paper we present a computer-based system for discovering similar documents, which has been in use at Masaryk University in Brno since August 2006, and which will also be used in the forthcoming Czech national archive of graduate theses. We focus on practical aspects of this system: achieving near real-time response to newly imported documents, and the computational feasibility of handling large sets of documents on commodity hardware. We also show the possibilities and problems of parallelizing this system to run on a distributed cluster of computers.
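Document-similarity systems of this kind commonly compare documents by their overlapping word n-grams; the following minimal sketch illustrates that idea and is not the Masaryk system's actual algorithm:

```python
def ngrams(text, n=3):
    """Set of overlapping word n-grams of a document."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(doc_a, doc_b, n=3):
    """Fraction of doc_a's n-grams that also occur in doc_b."""
    a, b = ngrams(doc_a, n), ngrams(doc_b, n)
    return len(a & b) / len(a) if a else 0.0

# Example with two short, made-up submissions
score = similarity("the quick brown fox jumps over the lazy dog",
                   "a quick brown fox jumps over a sleeping dog")
```

In a production system, an inverted index from n-gram hashes to document identifiers, distributed across a cluster, is what makes near real-time lookup over large collections feasible.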
Title: ARCHITECTURE FOR END USER-DRIVEN COMPOSITION OF UNDERSPECIFIED, HUMAN-CENTRIC BUSINESS PROCESSES
Author(s): Todor Stoitsev, Stefan Scheidl, Felix Flentge and Max Mühlhäuser
Abstract: Enterprises are constantly struggling to optimize their business processes in order to gain competitive advantage and to survive in the fast-evolving global market. Often, the only ones who understand the matter and complexity of these processes are the people who actually execute them. This raises the need for novel business process management approaches which enable business users to proactively express process knowledge and to participate in business process management and design according to their actual expertise and problem-solving strategies. This paper describes an architecture that supports a framework for end user-driven composition and management of underspecified, human-centric business processes. The solution builds on email-integrated task management and enables the dynamic generation of decentralized, emerging process structures through web service-based activity tracking. The captured process execution examples are shared in central enterprise repositories for further adaptation and reuse. This enables "seeding, evolutionary growth, and reseeding" of user-defined, weakly structured process models towards global best-practice definitions.

Title: A STUDY OF DATA QUALITY ISSUES IN MOBILE TELECOM OPERATORS
Author(s): Naiem Khodabandhloo Yeganeh and Shazia Sadiq
Abstract: The number of telecommunication operators servicing mobile users worldwide has increased dramatically in the last few years. Although most of the operators use similar technologies and equipment provided by world leaders in the field such as Ericsson, Nokia-Siemens and Motorola, it can be observed that many vendors utilize proprietary methods and processes for maintaining network status and collecting statistical data for the detailed monitoring of network elements.
This data forms their competitive differentiation and hence is extremely valuable to the organization. In this paper, however, we demonstrate through a case study based on a GSM operator in Iran how this mission-critical data can be fraught with serious data quality problems, leading to a diminished capacity to take appropriate action and, ultimately, to achieve customer satisfaction. We further present a taxonomy of data quality problems derived from the case study. A comprehensive survey of the reported literature on data quality is presented in the context of the taxonomy, which can not only be utilized as a framework to classify and understand data quality problems in the telecommunication domain, but can also be used in other domains with similar information systems landscapes.

Title: OUTLINING VALUE ASSESSMENT FOR SOFTWARE REQUIREMENTS
Author(s): Pasi Ojala
Abstract: Understanding software requirements and customer needs is vital for all SW companies around the world. Lately, clearly more attention has also been focused on the costs, cost-effectiveness, productivity and value of software development and products. This study outlines the concepts, principles and process of implementing a value assessment for SW requirements. The main purpose of the study is to collect experience on whether value assessment for product requirements is useful for companies and works in practice, and on the strengths and weaknesses of using it. This is done by implementing value assessment in a case company step by step, to see which phases work and which do not. The practical industrial case shows that the proposed value assessment for product requirements is useful and supports companies trying to find value in their products.
Title: TREE EMBEDDING AND XML QUERY EVALUATION Author(s): Yangjun Chen Abstract: With the growing importance of XML in data exchange, much research has been done on providing flexible query mechanisms to extract data from XML documents. A core operation for XML query processing is to find all occurrences of a twig pattern Q (a small tree) in a document T. Prior work has typically decomposed Q into binary structural relationships, such as parent-child and ancestor-descendant relations, or root-to-leaf paths. The twig matching is achieved by: (i) matching the binary relationships or paths against XML databases, and (ii) using join algorithms to stitch together all the matching binary relationships or paths. In this paper, we propose a new algorithm for this task with no costly path joins or join-like operations involved. The time and space complexities of the algorithm are both bounded by O(|D|·|Q|), where D is the largest data stream associated with a node v of Q such that each u ∈ D has the same element name as v. Our experiments show that our method is efficient in supporting twig pattern queries. Title: A FRAMEWORK TO SUPPORT INTEROPERABILITY AND MULTI-CHANNEL DELIVERY AMONG HETEROGENEOUS SYSTEMS: TRAME PROJECT Author(s): Ugo Barchetti, Alberto Bucciero, Luca Mainetti and Stefano Santo Sabato Abstract: E-commerce has become a point of strength for companies that wish to increase their turnover by enlarging their client base and reducing management costs. This has created the demand for platforms able to support interoperability between heterogeneous systems and multi-channel access from varied devices, so that different services can be accessed in a reliable manner and the market can thus spread toward partners with particular needs. Furthermore, many available services have typically been designed for a single channel: the web.
In a real-world scenario, an ever-growing number of users take advantage of different kinds of communication channels and devices. In this paper we propose a B2B-oriented framework able to support interoperability among heterogeneous systems, developed according to the ebXML reference model for business message interchange and suitable for any B2B marketplace that foresees commercial interaction among partners with different roles and profiles (including channel and device). This framework has been developed and tested within the TRAME research project, whose objective is to create a district-level clearing mechanism for peaks in production capacity demand within the Textile/Clothing sector and to provide the infrastructure needed for business message exchange among the partners of the production chain. Title: GUI GENERATION BASED ON LANGUAGE EXTENSIONS - A MODEL TO GENERATE GUI, BASED ON SOURCE CODE WITH CUSTOM ATTRIBUTES Author(s): Marco Monteiro, Paula Oliveira and Ramiro Gonçalves Abstract: Due to the nature of data-driven applications and their increasing complexity, developing their user interfaces can be a repetitive and time-consuming activity. Consequently, developers tend to focus more on user interface aspects and less on business-related code. In this paper, we present a novel approach to graphical user interface development for data-driven applications that allows developers to refocus on the source code and concentrate their efforts on the application’s core logic. The key concept behind our approach is the generation of a concrete graphical user interface from a source code-based model, which includes the original source code metadata and non-intrusive declarative language extensions that describe the user interface structure. The concrete user interface implementation is delegated to specialized software packages, developed by external entities, that provide complete graphical user interface services to the application.
When applying our approach to data-driven application development, we expect better separation of business and presentation issues and faster graphical user interface development. Title: A NOVEL APPROACH TO SUPPORT CHANGE IMPACT ANALYSIS IN THE MAINTENANCE OF SOFTWARE SYSTEMS Author(s): Guenter Pirklbauer and Michael Rappl Abstract: The costs of enhancing and maintaining software systems amount to 75% of total development costs. It is therefore important to provide appropriate methods, techniques and tools to support the maintenance phase of the software life cycle. One major maintenance task is the analysis and validation of change impacts. Existing approaches address change impact analysis, but using them in practice raises specific problems. Tools for change impact analysis must be able to deal with inconsistent requirements and design models, and with large legacy systems distributed across data processing centers. We have developed an approach and an evaluation framework to overcome these problems. The proposed approach combines methods of dynamic dependency analysis and change coupling analysis to detect physical and logical dependencies. The implementation of the approach - a framework consisting of methods, techniques and tools - will support both management and developers. The goal is to detect low-level artefacts and dependencies based only on up-to-date and system-conformant data, including logfiles, the service repository, the versioning system database and the change management system database. With the assistance of a data warehouse, the framework will enable dynamic querying and reporting. Title: SEMANTIC DATA INTEGRATION FOR PROCESS ENGINEERING DESIGN DATA Author(s): Andreas Wiesner, Jan Morbach and Wolfgang Marquardt Abstract: During the design phase of a chemical plant, information is typically created by various software tools and stored in different documents and databases.
Unfortunately, further processing of the data is often hindered by the structural, syntactic and semantic heterogeneities of the data sources. In fact, merging and consolidating the data becomes virtually prohibitive when exclusively conventional database technologies are employed. Therefore, XML technologies as well as specific domain ontologies are increasingly applied in the context of data integration. This contribution outlines an ongoing research project at the authors’ institute that aims at developing a prototypical software tool which exploits the benefits of semantic as well as conventional database technologies for the integration and consolidation of design data. Both ontology and software development are performed in close cooperation with partners from the chemical and software industries to ensure compliance with the requirements of industrial practice. Title: ERP IMPACT ON NUMBER OF EMPLOYEES AND PERSONNEL COSTS IN SMALL AND MEDIUM SIZED ENTERPRISES - A PANEL DATA APPROACH Author(s): Jose Esteves Sousa and Victor Bohorquez Lopez Abstract: Enterprise Resource Planning (ERP) vendors have emphasized the positive impact of their ERP projects on company performance and cost reduction. Recently, some researchers have started to analyze the impact on business performance of the organizational changes that complement IT investments. This study attempts to analyze the impact of ERP implementations on SMEs’ number of employees and personnel costs. We have collected information on 168 Spanish SMEs during the period 1997-2005, concerning the type of purchased ERP, implementation period, number of employees, personnel costs and some financial indicators. We use two panel data models to compare and analyze the evolution of the number of employees and the related personnel costs, before and after ERP implementations.
Our preliminary findings suggest that the bigger the SME, the smaller the decrease in its number of employees. On the other hand, ERP has a positive impact on personnel costs. This trend of increasing personnel costs can be explained in the sense that SMEs using an ERP system need people not only with specific operative skills but also with a very holistic approach, in order to understand the ERP system and obtain maximum benefit from it. Title: MEASURING CRITICAL SUCCESS FACTORS IN ERP PROJECTS - RESULTS FROM A CASE STUDY IN A SME Author(s): Jos J. M. Trienekens and Pedro van Grinsven Abstract: Over the past decade, many organizations have become increasingly concerned with the implementation of Enterprise Resource Planning (ERP) systems. This holds for large as well as small and medium-sized companies. Implementation can be considered a process of change influenced by different so-called critical success factors (CSF) of an organizational, technological and human nature. This paper reports on the development of a measurement approach for managing CSF in an ERP implementation project in a small and medium-sized enterprise (SME). Critical success factors are derived from project goals and subsequently measured in this project to monitor and control the implementation. Title: TOWARDS A CLOSED-LOOP BUSINESS INTELLIGENCE FRAMEWORK Author(s): Oscar Mangisengi and Ngoc Thanh Huynh Abstract: Bringing sense-and-respond capabilities to business intelligence systems for decision making will become essential for organizations in the foreseeable future. An existing challenge is that organizations need to make business processes the centrepiece of their strategy, enabling these processes to perform at a higher level and to be efficiently improved in the face of global competition.
Traditional Data Warehouse and OLAP tools, which have been used for data analysis in Business Intelligence (BI) systems, are inadequate for delivering information fast enough for decision making and for identifying failures of a business process early. In this paper we propose a closed-loop BI framework that can be used for monitoring and analyzing a business process of an organization, optimizing the business process, and reporting costs based on activities. Business Activity Monitoring (BAM), as the data resource of a control system, is at the heart of this framework. Furthermore, to support such a BI system, we integrate an extracting, transforming, and loading tool that works based on rules and the state of business process activities. The tool can automatically transfer data into a data warehouse when the rule and state conditions have been satisfied. Title: INTEROPERABILITY IN THE PETROLEUM INDUSTRY Author(s): Jon Atle Gulla Abstract: The petroleum industry is a technically challenging business with highly specialized companies and complex operational structures. Several terminological standards have been introduced over the last few years, though they address particular disciplines and cannot help people collaborate efficiently across disciplines and organizational borders. This paper discusses the results from the industrially driven Integrated Information Platform project, which has developed and formalized an extensive OWL ontology for the Norwegian petroleum business. The ontology is now used in production reports, and it is considered vital to semantic interoperability and the concept of integrated operations on the Norwegian continental shelf. Title: A GOAL METHOD FOR CONCEPTUAL DATA WAREHOUSE DESIGN Author(s): Leopoldo Zepeda, Ramon Zatarain and Matilde Celma Abstract: A Data Warehouse (DW) is a database used for analytical processing whose principal objective is to maintain and analyze historical data (Kimball, R., Ross, M).
Since the introduction of the multidimensional data model as a modelling formalism for DW design, several techniques have been proposed to capture multidimensional data at the conceptual level. In this paper, we present a goal-oriented method for DW analysis requirements. The paper shows how goal modelling contributes to a logical scoping and analysis of the application domain to elicit the information requirements, from which the conceptual multidimensional schema is derived. Title: VALUE-BASED SOFTWARE PROJECT MANAGEMENT - A BUSINESS PERSPECTIVE ON SOFTWARE PROJECTS Author(s): Anderson Itaborahy, Káthia Marçal de Oliveira and Rildo Ribeiro dos Santos Abstract: When an organisation decides to invest in a software project, it expects to get some value in return. Thus, decisions in software project management should be based on this expected value, by trying to understand and influence its driving factors. However, despite the significant progress software engineering and project management have experienced in recent years, both disciplines work in a ‘value-neutral’ context, meaning that they focus on technical correctness and adherence to plans. This paper intends to contribute to a view of software project management based on business value, by identifying value-determinant factors in a software project and proposing some tools for recording and monitoring them. The proposed approach will be tested in a real project, in order to evaluate its applicability and usefulness in decision making. Title: INTRA-ORGANISATIONAL ERP LIFECYCLE KNOWLEDGE ISSUES Author(s): Greg Timbrell Abstract: A study of 27 ERP systems in the Queensland Government revealed 41 issues clustered into seven major issue categories. Two of these categories described intra- and inter-organisational knowledge-related issues. This paper describes and discusses the intra-organisational knowledge issues arising from this research.
These intra-organisational issues include insufficient knowledge in the user base, ineffective staff and knowledge retention strategies, inadequate training methods and management, inadequate helpdesk knowledge resources and, finally, an under-resourced helpdesk. When barriers arise in knowledge flows from sources such as implementation partner staff, training materials, trainers, and help-desk staff, issues such as those reported in this paper arise in the ERP lifecycle. Title: DESIGNING XML PIVOT MODELS FOR MASTER DATA INTEGRATION VIA UML PROFILE Author(s): Ludovic Menet and Myriam Lamolle Abstract: The majority of information systems are affected by heterogeneity in both data and solutions. The use of this data thus becomes complex, inefficient and expensive in business applications. The issues of data integration, data storage, and the design and exchange of models are strongly linked. The need to federate data sources and to use a standard modelling formalism is apparent. In this paper, we focus on mediation solutions based on an XML architecture. The integration of heterogeneous data sources is done by defining a pivot model. This model uses the XML Schema standard, allowing the definition of complex data structures. We introduce features of the UML formalism, through a profile, to facilitate the collaborative definition and exchange of these models, and to introduce the capacity to express semantic constraints in XML models. These constraints will be used to perform data factorisation and to optimise data operations. Title: TOWARDS THE NEXT GENERATION OF SERVICE-ORIENTED FLEXIBLE COLLABORATIVE SYSTEMS - A BASIC FRAMEWORK APPLIED TO MEDICAL RESEARCH Author(s): Jonas Schulte, Thorsten Hampel, Konrad Stark, Johann Eder and Erich Schikuta Abstract: Collaborative systems have to support specific functionalities in order to be useful for particular fields of application and to fulfil their requirements.
In this paper we introduce the Wasabi framework for collaborative systems, which is characterised by flexibility and adaptability. The framework implements a service-oriented architecture and integrates different persistence layers. The requirement analysis for the Wasabi CSCW system is presented in the context of a collaborative environment for medical research, which has strict requirements concerning data integrity. This paper shows the results of the requirement analysis and how these are implemented in the Wasabi architecture. Title: MIGRATION BETWEEN CONTENT MANAGEMENT SYSTEMS - EXPLOITING THE JSR-170 CONTENT REPOSITORY STANDARD Author(s): Michael Nebeling, Grace Rumantir and Lindsay Smith Abstract: Content management systems (CMS) have evolved in various different ways. Even amongst CMS supporting JSR-170, the recent Java standard to organise and access content repositories, incompatible content structures exist due to substantial differences in the implemented content models. This can be of primary concern when migration between CMS is required. This paper proposes a framework to enable automated migrations between CMS in the absence of consistency and uniformity in content structures. This framework serves as the starting point of a body of research to design a common content model which can be implemented in future CMS to facilitate migration between CMS based on JSR-170 and improve integration into existing information systems environments. It is illustrated via a simple website created using two of the most popular open-source CMS supporting JSR-170, namely Magnolia and Alfresco. A model-based approach towards a generalised content structure is postulated to resolve the differences between the proprietary content structures as identified in the visualisation of the simple website created.
The proposed model has been implemented in Jackrabbit, the JSR-170 reference implementation, and the proposed framework therefore contains simple methods to transform content structures between Magnolia and Alfresco using this Jackrabbit implementation as an intermediary. Title: A VISUAL SPECIFICATION TOOL FOR EVENT-CONDITION-ACTION RULES SUPPORTING WEB-BASED DISTRIBUTED SYSTEMS Author(s): Wei Liu, Ying Qiao, Xiang Li, Kang Zhong, Hongan Wang and Guozhong Dai Abstract: Specifying Event-Condition-Action (ECA) rules is an important issue in the domain of active databases. Current specification tools for ECA rules include visual specification tools and textual specification tools based on XML. Here, the visualization of ECA rules provides an easy-to-use interface in design/analysis tools for active database queries, while the XML-based representation allows the exchange of ECA rules in a web-based distributed environment. Thus, a specification tool with the advantages of both visual and XML-based representation is needed. In this paper we present and implement a new visual specification tool for ECA rules, called VSTE, to address this issue. We also use a web-based smart home system to evaluate our work. Title: GATHERING PRODUCT DATA FROM SMART PRODUCTS Author(s): Jan Nyman, Kary Främling and Vincent Michel Abstract: Making the data produced by product-embedded sensor devices available for use in product development could greatly benefit manufacturers, while opening up new business opportunities. Currently, products such as cars already have embedded sensor devices, but the data is usually not available for analysis in real time. We propose that a world-wide, inter-organizational network for product data gathering should be created. The network should be based on open standards so that it can be widely adopted. It is important that a common, interoperable solution is accepted by all companies, big or small, to enable innovative new services to be developed.
In this paper, the concept of the Internet of Things (IoT) is described. The PROMISE project is presented, and a distributed messaging system for product data gathering developed within the project is introduced. Practical experiences related to the implementation of the messaging system in a real application scenario are discussed. Title: SEMANTIC INTEROPERABILITY - INFORMATION INTEGRATION BY USING ONTOLOGY MAPPING IN INDUSTRIAL ENVIRONMENT Author(s): Irina Peltomaa, Heli Helaakoski and Juha Tuikkanen Abstract: Interoperability requires two components: technical integration and information integration. Most enterprises have solved the problem of technical integration, but at the moment they are struggling with information integration. The challenge in information integration is to preserve the meaning of information in different contexts. Semantic technologies can provide the means for information integration by representing the meaning of information. This paper introduces how to use semantics by developing ontology models based on enterprise information. Different ontology models from diverse sources and applications can be mapped together in order to provide an integrated view of different information sources. Furthermore, this paper describes the process of ontology development and mapping. The domain of this case study is a heavy industrial environment with multiple applications and data sources. Title: A METADATA MODEL FOR KNOWLEDGE DISCOVERY IN DATABASE Author(s): José Rafael Carvalho and Maria Madalena Dias Abstract: Metadata are essential in a Knowledge Discovery in Databases (KDD) environment, since they are responsible for the whole documentation of the data that make up a data warehouse (DW), the latter being used to store data about the organization’s business. Such data usually come from several data sources, thus the metadata format should be independent of the platform.
System interoperability can be achieved by using XML (Extensible Markup Language). Therefore, in the present paper, a metadata model for KDD in XML format and a metadata manager are presented. The manager, which supports the proposed model, was implemented in Java. Title: TOWARDS A SEMIOTIC QUALITY FRAMEWORK OF SOFTWARE MEASURES Author(s): Erki Eessaar Abstract: Each software entity should have as high a quality as possible in the context of limited resources. A software quality metric is a kind of software entity. Existing studies about the evaluation of software metrics do not pay enough attention to the quality of the specifications of metrics. Semiotics has been used as a basis for evaluating the quality of different types of software entities. In this paper, we propose a multidimensional, semiotic quality framework of software quality metrics. We apply this framework in order to evaluate the syntactic and semantic quality of two sets of database design metrics. The evaluation shows that these metrics have some quality problems. Title: DATA MANAGEMENT AND INTEGRATION WITHIN COLLABORATIVE WORKING ENVIRONMENTS Author(s): Assel Matthias and Kipp Alexander Abstract: With increasingly distributed and inhomogeneous resources, sharing knowledge, information, or data becomes more and more difficult to manage for both end-users and providers. To reduce administrative overheads and ease the complicated and time-consuming integration of widely dispersed (data) resources, quite a few solutions for collaborative data sharing and access have been designed and introduced in several European research projects, for example CoSpaces and ViroLab.
These two projects basically concentrate on the development of collaborative working environments for different user communities, such as engineering teams and health professionals, with a particular focus on the integration of heterogeneous and large data resources into the system's infrastructure. In this paper, we present two approaches realised within CoSpaces and ViroLab to overcome the difficulties of integrating multiple data resources and making them accessible in a user-friendly but also secure way. We start with an analysis of the systems' specifications, describing user and provider requirements for appropriate solutions. Finally, we conclude with an outlook and give some recommendations on how those systems can be further enhanced in order to guarantee a certain level of dynamicity, scalability, reliability, and, last but not least, security and trustworthiness. Title: A FRAMEWORK FOR PROTECTING EJB APPLICATIONS FROM MALICIOUS COMPONENTS Author(s): Hieu Vo and Masato Suzuki Abstract: Enterprise JavaBeans (EJB) components in an EJB application can be obtained from various sources. These components may be developed in-house or bought from other vendors. In the latter case, the source code of the components is usually not available to application developers. As a result, the application may contain malicious components. We propose a framework called BFSec that protects EJB applications from malicious components. The framework examines the bean methods invoked by each thread in the application and compares them with pre-defined business functions to check whether the threads' latest calls are legitimate. Unexpected calls, which are considered to be made by malicious components, are blocked.
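The per-thread call checking described in the BFSec abstract can be sketched roughly as follows. This is a minimal illustration in Python rather than Java, with hypothetical business-function names, and it is not the authors' implementation: business functions are modelled as pre-defined sequences of bean-method calls, and a thread's latest call is allowed only while its call trace remains a prefix of at least one business function.

```python
# Sketch (not the authors' code) of BFSec-style call checking:
# a call is proper if the thread's call trace so far is still a
# prefix of some pre-defined business function; otherwise it is blocked.

ALLOWED_FUNCTIONS = [                 # hypothetical business functions
    ["checkStock", "reserveItem", "charge"],
    ["checkStock", "cancel"],
]

class CallGuard:
    def __init__(self, functions):
        self.functions = [tuple(f) for f in functions]
        self.traces = {}              # thread id -> calls made so far

    def invoke(self, thread_id, method):
        trace = self.traces.get(thread_id, ()) + (method,)
        # Allowed if some business function starts with this trace.
        if any(f[:len(trace)] == trace for f in self.functions):
            self.traces[thread_id] = trace
            return True               # call allowed
        return False                  # unexpected call: block it

guard = CallGuard(ALLOWED_FUNCTIONS)
assert guard.invoke("t1", "checkStock")
assert guard.invoke("t1", "reserveItem")
assert not guard.invoke("t1", "stealData")   # blocked
```

The paper's framework performs this comparison inside the EJB container; the sketch only shows the prefix-checking idea.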
Title: CONFIGURATION FRAGMENTS AS THE DNA OF SYSTEM AND CHANGE PROPERTIES - ARCHITECTURAL CHANGE OF COMPONENT-BASED AND SERVICE-ORIENTED SYSTEMS Author(s): D'Arcy Walsh Abstract: The notion of a Configuration Fragment is adopted to help address the challenge of managing the different kinds of dependencies that exist during the evolution of component-based and service-oriented systems. Based upon a model of Architectural Change and an example application-specific context, Configuration Fragments are defined in order to express and reconcile change properties with respect to existing system properties. During system evolution, Configuration Fragments enable the configuration of Service and Service Protocol, Operation and Provided Service, Operation and Required Service, Operation and Operation, Operation and State Element, Operation and Composite Component, Component and Component, and Required Service and Provided Service dependencies. This occurs through the process of configuration leading to association, configuration leading to disassociation, or configuration leading to refinement of these system elements. Title: A FRAMEWORK FOR DATA CLEANING IN DATA WAREHOUSES Author(s): Taoxin Peng Abstract: There is a growing awareness that high-quality data is a key to today’s business success. In recent years, data warehouse applications have become more and more popular. It is a persistent challenge to achieve high data quality in data warehouses. It is estimated that as much as 75% of the effort spent on building a data warehouse can be attributed to back-end issues, such as readying the data and transporting it into the data warehouse. Among the tasks of readying data, data cleaning is crucial. To deal with this problem, a set of methods and tools has been developed. However, there are still at least two questions that need to be answered: How can the efficiency of data cleaning be improved? How can the degree of automation in data cleaning be improved?
This paper addresses these two questions by presenting a novel framework, which provides an approach to managing data cleaning in data warehouses by focusing on the use of data quality dimensions and decoupling the cleaning process into several sub-processes. Initial test runs of the processes in the framework demonstrate that the presented approach is efficient and scalable for data cleaning in data warehouses. Title: DEVISA - CONCEPTS AND ARCHITECTURE OF A DATA MINING MODELS SCORING AND MANAGEMENT WEB SYSTEM Author(s): Diana Gorea Abstract: In this paper we describe DeVisa, a Web system for the scoring and management of data mining models. The system has been designed to provide unified access to different prediction models using standard XML-based technologies. The prediction models are serialized in PMML format and managed using a native XML database system. The system provides functions such as scoring, model comparison, model selection and sequencing through a web service interface. DeVisa also defines a specialized PMML query language named PMQL, used for specifying client requests and interaction with the PMML repository. The paper analyzes the system's architecture and functionality and discusses its use as a tool for researchers. Title: HESITANCY IN COMMITTING TO LARGE-SCALE ENTERPRISE SYSTEMS SOLUTIONS - EXPERIENCES AT A MULTI-NATIONAL CORPORATION Author(s): Chris Barry and Wojtek Sikorski Abstract: While the early cited benefits of Enterprise Resource Planning (ERP) or enterprise systems remain for the most part highly desirable, it is often the case that the promise differs from the reality of delivery. Many now agree that achieving enterprise systems benefits is complex, cumbersome, risky and expensive. Furthermore, many ERP projects do not fully achieve expectations.
This paper takes a critical lens to the prospect of firms achieving enterprise systems’ benefits and presents the findings of a case study that examines the underlying managerial and organizational reasons of one multi-national enterprise for, at the least, delaying ERP implementation. It reveals a rich picture of implementation motivators, inhibitors and the perceived and real benefits of enterprise systems. Title: ITERATIVE XML SEARCH BASED ON DATA AND ASSOCIATED SEMANTICS Author(s): Alda Lopes Gançarski, Pedro Rangel Henriques and Flávio Xavier Ferreira Abstract: In previous work in the context of information retrieval, XQuery was extended with an iterative paradigm. This extension helps the user get the desired results from queries. In related work, XQuery was also extended to allow the inclusion of SPARQL queries; this is useful when XML documents are associated with semantic RDF descriptions. However, integrating SPARQL into XQuery makes the construction of queries more complex (although more powerful). To ease this integration, we propose to apply the iterative paradigm to the ‘SPARQL extension to XQuery’. In the paper, this proposal is introduced and justified, and a case study is presented. Title: ADAPTIVE MATCHING OF DATABASE WEB SERVICES EXPORT SCHEMAS Author(s): Daniela F. Brauner, Alexandre Gazola, Marco A. Casanova and Karin K. Breitman Abstract: This paper proposes an approach and a mediator architecture for adaptively matching the export schemas of database web services. Unlike traditional mediator approaches, the mediated schema is constructed from mappings adaptively elicited from user query responses. That is, query results are post-processed to identify reliable mappings and build the mediated schema on the fly. The approach is illustrated with two case studies from rather different application domains.
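Eliciting mappings from query responses, as the adaptive matching abstract above describes, can be illustrated with a toy sketch. The data, threshold and similarity measure here are illustrative assumptions, not the authors' method: two services answer the same query, and attributes whose returned value sets overlap strongly are recorded as tentative mappings.

```python
# Toy sketch of post-processing query results to elicit attribute
# mappings between two export schemas: attributes with high value-set
# overlap (Jaccard similarity) are treated as candidate mappings.

def elicit_mappings(results_a, results_b, threshold=0.5):
    """results_*: lists of dicts (rows) returned by each web service."""
    def values(rows, attr):
        return {row[attr] for row in rows}
    mappings = []
    for a in results_a[0]:
        for b in results_b[0]:
            va, vb = values(results_a, a), values(results_b, b)
            overlap = len(va & vb) / len(va | vb)   # Jaccard similarity
            if overlap >= threshold:
                mappings.append((a, b, overlap))
    return mappings

rows_a = [{"title": "SQL Primer", "yr": 1999},
          {"title": "XML Basics", "yr": 2003}]
rows_b = [{"name": "SQL Primer", "year": 1999},
          {"name": "XML Basics", "year": 2003}]
# ("title", "name") and ("yr", "year") emerge as candidate mappings.
```

A real mediator would accumulate such evidence over many user queries before adding a mapping to the mediated schema.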
Title: COMPLEX EVENT PROCESSING FOR SENSOR BASED DATA AUDITING Author(s): Christian Lettner, Christian Hawel, Thomas Steinmaurer and Dirk Draheim Abstract: Current legislation demands that organizations responsibly manage sensitive data. To achieve compliance, data auditing must be implemented in information systems. In this paper we propose a data auditing architecture that creates data audit reports out of simple audit events at the technical level. We use complex event processing (CEP) technology to obtain composite audit events from simple audit events. In two scenarios we show how complex audit events can be built for business processes and application users, when one database user is shared between many application users, as found in multi-tier architectures. Title: AN APPROACH FOR SCHEMA VERSIONING IN MULTI-TEMPORAL XML DATABASES Author(s): Zouhaier Brahmia and Rafik Bouaziz Abstract: Schema evolution keeps only the current schema version and its data after applying schema changes. In contrast, schema versioning creates new schema versions and preserves old schema versions and their corresponding data. These two techniques have been investigated widely, in the context of both static and temporal databases. With the growing interest in XML and temporal XML data, as well as the mechanisms for holding such data, the XML context within which data items are formatted also becomes an issue. While much research work has recently focused on the problem of schema evolution in XML databases, less attention has been devoted to schema versioning in such databases. In this paper, we propose an approach for schema versioning in multi-temporal XML databases. This approach is based on the XML Schema language for describing XML schemas, and is database consistency-preserving.
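The contrast drawn above between schema evolution and schema versioning can be shown with a minimal sketch. The structures and dates are hypothetical and stand in for the authors' XML Schema-based approach: under versioning, every schema version is kept with its validity start time, so data stored under an old schema remains interpretable, whereas evolution would retain only the latest version.

```python
# Minimal sketch of schema versioning: all timestamped schema versions
# are preserved, and the version valid at any given time can be
# retrieved (schema evolution would overwrite and keep only the last).

class VersionedSchema:
    def __init__(self):
        self.versions = []            # list of (valid_from, schema)

    def apply_change(self, valid_from, schema):
        self.versions.append((valid_from, schema))

    def version_at(self, time):
        """Return the schema version valid at the given time."""
        current = None
        for valid_from, schema in self.versions:
            if valid_from <= time:
                current = schema
        return current

s = VersionedSchema()
s.apply_change(2006, {"book": ["title", "author"]})
s.apply_change(2008, {"book": ["title", "author", "isbn"]})
assert s.version_at(2007) == {"book": ["title", "author"]}   # old version kept
assert s.version_at(2009) == {"book": ["title", "author", "isbn"]}
```

In a multi-temporal XML database the validity intervals would be proper temporal dimensions rather than single integers, but the retrieval principle is the same.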
Title: A METHODOLOGICAL APPROACH FOR MEASURING B2B INTEGRATION READINESS OF SMES Author(s): Spiros Mouzakitis, Aikaterini-Maria Sourouni, Fenareti Lampathaki and John Psarras Abstract: At the dawn of the 21st century, companies are seeking ways to perform transactions efficiently and effectively. Enterprises must tackle B2B integration and adoption challenges in the short term in order to survive in today's competitive business environment. However, most enterprises, and especially SMEs, lack the necessary technical and non-technical infrastructure, as well as the economic potential, to efficiently adopt a B2B integration framework. This paper presents a methodological approach to measuring the B2B integration readiness of enterprises, and the development of a software system to support it. Title: KNOWLEDGE MANAGEMENT IN INFORMATION SYSTEM DESIGN AND DELIVERY PROCESS - AN APPLICATION TO THE DESIGN OF A LEGAL INFORMATION SYSTEM Author(s): Ovidiu Vasutiu, Youssef Amghar, David Jouve and Jean-Marie Pinon Abstract: Nowadays, information systems are more and more important for all types of organizations. To deal with the complex technologies available, IT specialists use proven best practices inspired by more comprehensive process frameworks for software and systems delivery or implementation and for effective project management. Methods developed to support these processes produce many different heterogeneous resources (design documents and models, plans, project prototypes, etc.). Furthermore, information systems need to adapt to a continuously changing reality. Designers will always have to consider new user and stakeholder requirements and go back to the starting design case for an update. The design cycle is therefore iterative.
In this paper we present an organizational and technical infrastructure for a collaborative design process management system which automates mechanisms to ensure the coherence and consistency of these continuously updated resources. Our approach uses document structuring, knowledge representation, and mechanisms for dependency analysis and impact studies. Title: SIZE AND EFFORT-BASED COMPUTATIONAL MODELS FOR SOFTWARE COST PREDICTION Author(s): Efi Papatheocharous and Andreas S. Andreou Abstract: Reliable and accurate software cost estimations have always been a challenge, especially for people involved in project resource management. The challenge is amplified by the high level of complexity and uniqueness of the software process. The majority of proposed estimation methods fail to produce successful cost forecasts and do not resolve to an explicit, measurable and concise set of factors affecting productivity. Throughout the software cost estimation literature, software size is usually proposed as one of the most important attributes affecting effort and is used to build cost models. This paper aspires to provide size and effort-based estimations of the required software effort based on historical project data. The modeling approach utilises Artificial Neural Networks (ANN) with a random sliding window input method using holdout samples; moreover, a Genetic Algorithm (GA) evolves the inputs and internal hidden architectures to reduce the Mean Relative Error (MRE). The obtained optimal ANN topologies and input methods for each dataset are presented, discussed and compared to a classic MLR model. Title: WWW++ - ADDING WHY TO WHAT, WHEN AND WHERE Author(s): Paris Pennesi, Mark Harrison, Chaithanya Rao, Chien Yaw Wong and Srilakshmi Sivala Abstract: RFID technology can be used to its fullest potential only with software to supplement the hardware with powerful capabilities for data capture, filtering, counting and storage.
The EPCglobal Network architecture encourages minimizing the amount of business logic embedded in the tags, readers and middleware. This creates the need for a Business Logic Layer above the event filtering layer that enhances basic observation events with business context - i.e. in addition to the (what, when, where) information about an observation, it adds context information about why the object was there. The purpose of this project is to develop an implementation of the Business Logic Layer. This application accepts observation event data (e.g. from the Application Level Events (ALE) standard interface), enriches it with business context and provides these enriched events to a repository of business-level events (e.g. via the EPC Information Services (EPCIS) capture interface). The strength of the application lies in the automatic addition of business context. It is quick and easy to adapt any business process to the suggested framework, and equally easy to reconfigure it if the business process changes. A sample application has been developed for a business scenario in the retail sector. Title: AN ONTOLOGY-BASED APPROACH FOR SEMANTIC INTEROPERABILITY IN P2P SYSTEMS Author(s): Deise de Brum Saccol, Rodrigo Perozzo Noll, Nina Edelweiss and Renata de Matos Galante Abstract: In peer-to-peer (P2P) systems, files from the same application domain are spread over the network. When the user poses a query, the processing relies mainly on the flooding technique, which is quite inefficient for optimization purposes. To address this issue, our work proposes clustering documents from the same application domain into super peers. Thus, files related to the same universe of discourse are grouped and the query processing is restricted to a subset of the network. The clustering task involves ontology generation, document and ontology matching, and metadata management.
The proposed mechanism implements the ontology manager in DetVX, an environment for detecting, managing and querying replicas and versions in a P2P context. Title: A MAPPING-DRIVEN APPROACH FOR SQL/XML VIEW MAINTENANCE Author(s): Vânia M. P. Vidal, Fernando C. Lemos, Valdiana S. Araújo and Marco A. Casanova Abstract: In this work we study the problem of how to incrementally maintain materialized XML views of relational data, based on the semantic mappings that model the relationship between the source and view schemas. The semantic mappings are specified by a set of correspondence assertions, which are simple to understand. The paper focuses on an algorithm to incrementally maintain materialized XML views of relational data. Title: USING ONTOLOGY META DATA FOR DATA WAREHOUSING Author(s): Alberto Salguero, Francisco Araque and Cecilia Delgado Abstract: One of the most complex issues of the integration and transformation interface is the case where there are multiple sources for a single data element in the enterprise Data Warehouse (DW). There are many facets to this problem due to the number of variables involved in the integration phase. We are particularly interested in the temporal and spatial integration problem due to the nature of DWs. This paper presents our ontology-based DW architecture for temporal integration on the basis of the temporal and spatial properties of the data and the temporal characteristics of the data sources. The proposal shows the steps for the transformation of the native schemas of the data sources into the DW schema and end-user schema, and the use of an ontology model as the common data model. Title: MULTI-DIMENSIONAL MODELING - FORMAL SPECIFICATION AND VERIFICATION OF THE HIERARCHY CONCEPT Author(s): Ali Salem, Faiza Ghozzi and Hanene Ben-Abdallah Abstract: The quality of a data mart (DM) tightly depends on the quality of its multidimensional model.
This quality dependence has motivated several research efforts to define a set of constraints on the DM model/schema. Currently proposed constraints are either incomplete or informally presented, which may lead to ambiguous interpretations. The work presented in this paper is a first step towards the definition of a formal framework for the specification and verification of the quality of DM schemas. In this framework, quality is expressed in terms of both the syntactic well-formedness of the DM schema and its semantic soundness with respect to the DM instances. More precisely, this paper first formalizes in Z the constraints pertinent to the hierarchy concept; the formalization is treated at the meta-model level. Secondly, the paper illustrates how the formalization can be instantiated and the constraints verified for a particular sample model through the theorem prover Z/EVES. Title: A PROTOCOL TO CONTROL REPLICATION IN DISTRIBUTED REAL-TIME DATABASE SYSTEMS Author(s): Anis Haj Said, Bruno Sadeg, Laurent Amanton and Béchir Ayeb Abstract: Data replication is often used in distributed systems to improve both the availability and performance of applications accessing data. This is interesting for distributed real-time database systems, since improved access to data can help transactions meet their deadlines. However, such systems must maintain the consistency of data copies. To achieve this goal, distributed systems have to manage replication by implementing replication control protocols. In this paper, we discuss the contributions of data replication in distributed real-time database systems and then present RT-RCP, a replication control protocol we designed for DRTDBS. We introduce a new entity called the List of Available Copies (LAC), a list associated with each data item in the database. The LAC of a data item contains references to all, or a subset of, the updated replicas of this data item.
These references are used by sites in order to access data at the appropriate sites. RT-RCP ensures data updates without affecting system performance; it allows inconsistencies to happen but prevents access to stale data. Title: SCHEMA EVOLUTION IN WIKIPEDIA - TOWARD A WEB INFORMATION SYSTEM BENCHMARK Author(s): Carlo A. Curino, Hyun J. Moon, Letizia Tanca and Carlo Zaniolo Abstract: Evolving the database that is at the core of an Information System represents a difficult maintenance problem that has only been studied in the framework of traditional information systems. However, the problem is likely to be even more severe in web information systems, where open-source software is often developed through the contributions and collaboration of many groups and individuals. Therefore, in this paper, we present an in-depth analysis of the evolution history of the Wikipedia database and its schema; Wikipedia is the best-known example of a large family of web information systems built using the open-source software MediaWiki. Our study is based on: (i) a set of Schema Modification Operators that provide a simple conceptual representation for complex schema changes, and (ii) simple software tools to automate the analysis. This framework allowed us to dissect and analyze the 4.5 years of Wikipedia history, which was short in time, but intense in terms of growth and evolution. Beyond confirming the initial hunch about the severity of the problem, our analysis suggests the need for developing better methods and tools to support graceful schema evolution. Therefore, we briefly discuss documentation and automation support systems for database evolution, and suggest that the Wikipedia case study can provide the kernel of a benchmark for testing and improving such systems.
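The Schema Modification Operators mentioned in the abstract above can be pictured as small, composable operations that, replayed in sequence, reconstruct a schema's history. The sketch below is purely illustrative: the operator names, the dictionary-based schema representation, and the example column names are our assumptions, not the operators or data of the Wikipedia study itself.

```python
# Illustrative only: hypothetical schema-modification operators applied to a
# schema represented as a dict mapping table name -> list of column names.
def create_table(schema, table, columns):
    schema = dict(schema)
    schema[table] = list(columns)
    return schema

def add_column(schema, table, column):
    schema = dict(schema)
    schema[table] = schema[table] + [column]
    return schema

def rename_column(schema, table, old, new):
    schema = dict(schema)
    schema[table] = [new if c == old else c for c in schema[table]]
    return schema

# Replaying a recorded sequence of operators reconstructs each schema version.
history = [
    (create_table, ("page", ["page_id", "title"])),
    (add_column, ("page", "page_touched")),
    (rename_column, ("page", "title", "page_title")),
]
schema = {}
for op, args in history:
    schema = op(schema, *args)
```

Because each operator returns a new schema rather than mutating the old one, every intermediate version stays available, which is what makes operator logs usable for the kind of evolution analysis and benchmarking the abstract describes.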
Title: ERP IMPLEMENTATION CHALLENGES - VENDORS’ PERSPECTIVE Author(s): Jim Odhiambo Otieno Abstract: Enterprise Resource Planning (ERP) systems have transformed the way organizations go about the process of providing information systems. They promise to provide an off-the-shelf solution to the information needs of organizations. Despite that promise, implementation projects are plagued with much-publicized failures and abandoned projects. Efforts to make ERP systems successful in organizations are facing challenges. The purpose of the study reported in this paper was to investigate the challenges faced by organisations implementing ERP systems in Kenya from the consultants' point of view. Based on the factors identified from the interviews, a survey was administered to ERP consultants from five Kenyan organisations that were identified as having a key role in ERP systems implementation in their firms, in order to assess the criticality of the identified challenges. A factor analysis of these items identified six underlying dimensions. The findings of this study should provide the management of firms implementing ERP systems with a better understanding of the likely challenges they may face, and help them put in place appropriate measures to mitigate the risk of implementation failure. Title: TOWARDS A METHODOLOGY FOR MODELLING INTEROPERABILITY BETWEEN COLLABORATING ENTERPRISES Author(s): Anders Carstensen, Kurt Sandkuhl and Lennart Holmberg Abstract: In this paper we describe a collaboration study between two companies in a networked organisation. The main contribution is the connector view, by which it is possible to model the collaboration without major changes in existing enterprise models, although the collaboration may actually affect several elements in the original model. Supporting objects are used to connect elements in the connector view to the original model, thereby establishing correspondences between the connector view and the enterprise view.
Title: USING A DATAWAREHOUSE TO EXTRACT KNOWLEDGE FROM ROBOCUP TEAMS Author(s): Isabel Gonzalez, Pedro Abreu and Luis Paulo Reis Abstract: RoboCup is a scientific and educational international project that involves artificial intelligence, robotics and sport sciences. In these competitions, teams from all around the world participate in distinct leagues. At the beginning of the Coach competition, one of the RoboCup leagues, the goal of the researchers was to develop an agent (Coach) that provides advice to teammates about how to act and how to improve team performance. Using the resulting improved coach agent, with enhanced statistic calculation abilities, a huge amount of statistical data was gathered from the games held at Bremen 2006. This data was then stored and treated in a data warehouse system, yielding a good high-level perspective/knowledge of the RoboCup simulated soccer tournament. According to the results, the team that represented our country had many more goal opportunities than the majority of the teams, but did not score many goals. In terms of the most occupied regions, the four best teams in the tournament did not often occupy the left and right wings compared to other regions. In the future, our country's team needs to develop new strategies that preferentially use these two areas in order to achieve better results. Title: THE MOTION PICTURE PARADIGM FOR MANAGING INFORMATION - A FRAMEWORK AND APPROACH TO SUPPORTING THE PLAY AND REPLAY OF INFORMATION IN COMPUTERISED INFORMATION SYSTEMS Author(s): Bing Wu, Kudakwashe Dube and Essam Mansour Abstract: The easy production of organisational reports that present information to management is an important goal of computerised information systems. The inability of these computerised systems to provide a continuous, on-the-fly and dynamic “information scene” for the review period continues to allow hidden trends in information and important questions to remain undetected.
This paper presents a new paradigm, called the motion picture paradigm, for information management. This new paradigm addresses the above problem through a perspective that views an organisation’s database and information systems as a mechanism for recording motion pictures of organisational information. The paper presents the key concepts that were developed for the new paradigm and then demonstrates that this paradigm can be realised through a comprehensive framework for the multi-dimensional management of information for complex domains and the use of existing information technologies. Results of preliminary experience, involving the computerised management of clinical practice guideline and electronic healthcare record information, were obtained. The experience reveals that the motion picture paradigm facilitates, at any time, the easy and comprehensive review of information in a way that allows developments to be grasped easily, and enhances the possibility of detecting hidden trends and generating ground-breaking questions. Title: A STUDY OF THE SPATIAL REPRESENTATION IN MULTIDIMENSIONAL MODELS Author(s): Concepción M. Gascueña and Rafael Guadalupe Abstract: This work presents a study on the handling of multiple spatio-temporal granularities in Data Warehouses or Multidimensional Databases (MDB). These technologies are used in Decision Support Systems and in Geographic Information Systems; the latter locates spatial data on the Earth’s surface and studies its evolution through time. The possibility of storing spatial data with multiple granularities in databases allows us to study these data with multiple representations and clarifies the understanding of the subject under analysis.
This paper presents a conceptual multidimensional model called FactEntity (FE) to model MDB. This model adds new definitions, constructors, and hierarchical structures to deal with spatial and temporal multi-granularities under the multidimensional paradigm. In addition, the FE model defines some new concepts, such as Basic factEntities and Virtual factEntities, and the way to derive the data that makes up these Virtual factEntities. This study distinguishes two spatial granularity types, which we call geometric granularity and semantic granularity; in order to handle them, three new types of hierarchies are proposed: Dynamic, Static and Hybrid. We analyse the behaviour of spatial data with multiple granularities interacting with other spatial and thematic data. No existing multidimensional model allows gathering as much semantics as our model proposes. Title: TOWARDS A METHOD FOR ENTERPRISE INFORMATION SYSTEMS INTEGRATION Author(s): Rafael Silveira, Joan Pastor and Enric Mayol Abstract: Enterprise information systems integration is essential for organizations to fulfil interoperability requirements between applications and business processes. To carry out most typical integration requirements, traditional software development methodologies are not suitable. Neither are enterprise package implementation methodologies. Thus, specific ad-hoc methodologies are needed for information systems integration. This paper proposes a new methodology for enterprise information systems integration that facilitates continuous learning and centralized management throughout the integration process. This methodology has been developed based on the integration experience gained in a real case, which is briefly described. Title: SODDA – A SERVICE-ORIENTED DISTRIBUTED DATABASE ARCHITECTURE Author(s): Breno Mansur Rabelo and Clodoveu Augusto Davis Jr.
Abstract: The increasing dissemination of information systems distributed over computer networks has reinforced the interest in distributed database management systems (DDBMS). A review of the architecture of such systems is currently motivated by the wide availability of networking resources, especially the Internet, through which the cost of communication among the nodes of a distributed database can be reduced. Besides, the coming of age of syntactic interoperability standards, such as XML, and of service-based networking allows for new alternatives for the implementation and deployment of distributed databases. This paper presents a review of the classical distributed database management architecture from a technological standpoint, suggesting its use in the context of spatial data infrastructures (SDI). The paper also proposes the adoption of elements from service-oriented architectures for the implementation of the connections among distributed database components, thus configuring a service-oriented distributed database architecture. Title: LANGUAGE EXTENSIONS FOR THE AUTOMATION OF DATABASE SCHEMA EVOLUTION Author(s): George Papastefanatos, Panos Vassiliadis, Alkis Simitsis, Konstantinos Aggistalis, Fotini Pechlivani and Yannis Vassiliou Abstract: The administrators and designers of modern Information Systems face the problem of maintaining their systems in the presence of frequently occurring changes in any component of them. In other words, when a change occurs at any point of the system – e.g., source, schema, view, software construct – they should propagate the change to all the involved parts of the system. Hence, it is imperative that the whole process be done correctly, i.e., the change should be propagated to all the appropriate points of the system, with a limited overhead imposed on both the system and the humans who design and maintain it. In this paper, we deal with the problem of evolution in the context of databases.
First, we present a coherent, graph-based framework for capturing the effect of potential changes in the database software of an Information System. Next, we describe a generic annotation policy for database evolution and we propose a feasible and powerful extension to the SQL language specifically tailored for the management of evolution. Finally, we demonstrate the efficiency and feasibility of our approach through a case study based on a real-world situation that occurred in the Greek public sector. Title: IMPLEMENTING THE DATA ACCESS OBJECT PATTERN USING ASPECTJ Author(s): André Luiz de Oliveira, André Luis Andrade Menolli and Ricardo Gonçalves Coelho Abstract: Due to the constant need to access and store information, implementing these functionalities is a recurring concern in a large part of currently developed applications. Most of these applications use the Data Access Object pattern to implement these functionalities, since this pattern makes possible the separation of data access code from application code. However, its implementation exposes the data access object to the other application objects, causing situations in which a business object accesses the data access object directly. With the objective of solving this problem, the present paper proposes an aspect-oriented implementation of this pattern, followed by a quantitative evaluation of both the object-oriented (OO) and aspect-oriented (AO) implementations of the pattern. This study used strong software engineering attributes, such as separation of concerns, coupling and cohesion, as evaluation criteria. Title: K-MEANS BASED APPROACH FOR OLAP DIMENSION UPDATES Author(s): Fadila Bentayeb Abstract: Current data warehouse models usually consider OLAP dimensions as static entities. However, in practice, structural changes to dimension schemas are often necessary to adapt the multidimensional database to changing requirements.
This article presents a new structural update operator for OLAP dimensions. This operator can create a new level to which a pre-existing level in an OLAP dimension hierarchy rolls up. To define the domain of the new level and the aggregation function from the existing level to the new level, our operator classifies all instances of the existing level into k clusters with the k-means clustering algorithm. To choose features for k-means clustering, we propose two solutions: the first uses descriptors of the pre-existing level in its dimension table, while the second describes the level by measure attributes in the fact table. As data warehouses are very large databases, these solutions were integrated inside an RDBMS: the Oracle database system. In addition, we carried out experiments which validated the relevance of our approach. Title: STUDY OF CHALLENGES AND TECHNIQUES IN LARGE SCALE MATCHING Author(s): Sana Sellami, Aicha-Nabila Benharkat, Youssef Amghar and Rami Rifaieh Abstract: Matching techniques are becoming a very attractive research topic. With the development and use of a large variety of data (e.g. DB schemas, ontologies, taxonomies) in many domains (e.g. libraries, life science, etc.), matching techniques are called on to overcome the challenge of aligning and reconciling these different interrelated representations. In this paper, we are interested in studying large scale matching approaches. We define a Quality of Matching (QoM) that can be used to evaluate large scale matching systems. We survey the techniques of large scale matching, used when a large number of schemas/ontologies and attributes are involved. We attempt to cover a variety of techniques for schema matching, called pair-wise and holistic, as well as a set of useful optimization techniques. One can acknowledge that this domain is in full effervescence and that large scale matching still needs many more advances.
We therefore propose a contribution that deals with the creation of a hybrid approach combining these techniques. Title: CONCEPTUAL UNIVERSAL DATABASE LANGUAGE (CUDL) AND ENTERPRISE MEDICAL INFORMATION SYSTEMS Author(s): Nikitas N. Karanikolas, Christos Skourlas, Maria Nitsiou and Emmanuel J. Yannakoudakis Abstract: Today, there is an increasing trend towards Electronic Patient Records (EPR) that incorporate and correlate heterogeneous information imported from various sources and from different medical applications. New possibilities are also offered by rapid technological progress and the development of independent software applications and tools that handle multimedia medical data. Moreover, users (e.g. doctors, nurses) often prefer to use general-purpose software (e.g. word processors) and specific applications and tools for organizing and accessing medical data, and they only partially use Hospital Information Systems (HIS). Therefore, it is necessary for the HIS to provide the capability of encapsulating externally created information in their EPR. Dynamic evolution of the HIS must also be supported by flexible database schemata. In this paper, we conclude that modern HIS should be designed and implemented using database management systems offering new enhanced database models and manipulation languages. Finally, we describe and discuss how to use the Frame Database Model (FDB) and the new Conceptual Universal Database Language (CUDL) to support dynamic schema evolution and cover the needs of the users. Title: TRANSACTIONAL SUPPORT IN NATIVE XML DATABASES Author(s): Theo Härder, Sebastian Bächle and Christian Mathis Abstract: Apparently, everything that can be said about concurrency control and recovery has already been said. Nevertheless, the XML model opens new issues for the optimization of transaction processing. In this position paper, we report on our current view concerning XML transaction optimization.
We explore aspects of fine-grained transaction isolation using tailor-made lock protocols. Furthermore, we outline XML storage techniques where storage representation and logging can be minimized in specific application scenarios. Title: GREEN COMPUTING - A CASE FOR DATA CACHING AND FLASH DISKS? Author(s): Karsten Schmidt, Theo Härder, Joachim Klein and Steffen Reithermann Abstract: Green computing, or energy saving when processing information, is primarily considered a task of processor development. However, this position paper advocates that a holistic approach is necessary to reduce power consumption to a minimum. We discuss the potential of integrating NAND flash memory into DB-based architectures and its support by adjusted DBMS algorithms governing IO processing. The goal is to drastically improve energy efficiency while maintaining performance comparable to that of disk-based systems. Title: A STORE OF JAVA OBJECTS ON A MULTICOMPUTER Author(s): Mariusz Bedla and Krzysztof Sapiecha Abstract: The research deals with Object-Oriented versions of Scalable Distributed Data Structures (OOSDDS) to store Java objects in serialized form. OOSDDS can be used as part of a distributed object store. In the paper, an architecture for an object version of RP* is introduced and its implementation for Java objects is given. Finally, the performance of the new architecture is evaluated. Title: SUCCINCT ACCESS CONTROL POLICIES FOR PUBLISHED XML DATASETS Author(s): Tomasz Müldner, Jan Krzysztof Miziołek and Gregory Leighton Abstract: We consider the setting of secure publishing of XML documents, in which read-only access control policies (ACPs) over static XML datasets are enforced using cryptographic keys. The role-based access control (RBAC) model provides a flexible method for specifying such policies. Extending the RBAC model to include role parameterization addresses the problem of role proliferation, which can occur in large-scale systems.
In this paper, we describe the complete design of a parameterized RBAC (PRBAC) model for XML documents. We also detail algorithms for generating the minimum number of keys required to enforce an arbitrary PRBAC policy; for distributing to each user only the keys needed for decrypting accessible nodes; and for applying the minimal number of encryption operations to an XML document required to satisfy the protection requirements of the policy. We also show that the time complexity of our approach is linear w.r.t. document size and the number of roles. This is a position paper presenting work in progress. Title: SCHEMA MAPPING FOR RDBMS Author(s): Calin-Adrian Comes, Ioan Ovidiu Spatacean, Daniel Stefan, Beatrice Stefan, Lucian-Dorel Savu, Vasile Paul Bresfelean and Nicolae Ghisoiu Abstract: A schema mapping is a specification that describes how data structured under one schema S, the source schema, is to be transformed into data structured under another schema T, the target schema. Schemata S and T without triggers and/or stored procedures (functions and procedures) are static. In this article, we propose a Schema Mapping Model specification that describes the conversion of a Schema Model from one Platform-Specific Model to another Platform-Specific Model according to Meta-Object Facility Query/View/Transformation, in a dynamic mode. Area 2 - Artificial Intelligence and Decision Support Systems Title: USING CASE-BASED REASONING TO EXPLAIN EXCEPTIONAL CASES Author(s): Rainer Schmidt and Olga Vorobieva Abstract: In medicine, many exceptions occur. In medical practice, and in knowledge-based systems too, it is necessary to consider them and to deal with them appropriately. In medical studies and in research, exceptions should be explained. We present a system that helps to explain cases that do not fit into a theoretical hypothesis. Our starting points are situations where neither a well-developed theory nor reliable knowledge nor a case base is available at the beginning.
So, instead of reliable theoretical knowledge and intelligent experience, we have just a theoretical hypothesis and a set of measurements. In this paper, we propose to combine Case-Based Reasoning with a statistical model. We use Case-Based Reasoning to explain those cases that do not fit the model. The case base has to be set up incrementally; it contains the exceptional cases, and their explanations are the solutions, which can be used to help explain further exceptional cases. Title: A NEW APPROXIMATE REASONING BASED ON SPMF Author(s): Dae-Young Choi Abstract: A new approximate reasoning method based on standardized parametric membership functions (SPMF) is proposed. It provides an efficient mechanism for approximate reasoning within linear time complexity. Thus, it can be used to improve the speed of approximate reasoning. Title: AN ENHANCED SYSTEM FOR PATTERN RECOGNITION AND SUMMARISATION OF MULTI-BAND SATELLITE IMAGES Author(s): Hema Nair Abstract: This paper presents an enhanced system developed in Java® for pattern recognition and pattern summarisation in multi-band (RGB) satellite images. Patterns such as islands, land, water bodies, rivers, fires and urban settlements in such images are extracted and summarised in linguistic terms using fuzzy sets. Some elements of supervised classification are utilised in the system to assist in the development of linguistic summaries. Results of testing the system to analyse and summarise patterns in SPOT MS images and LANDSAT images are also discussed. Title: TEMPORAL INFORMATION INDEXING MODEL Author(s): Witold Abramowicz and Andrzej Bassara Abstract: Modern information retrieval models are not capable of resolving queries containing temporal criteria. One is not able to search for documents whose content relates to certain time periods (for instance, “find all documents related to the third quarter of the last year”).
This limitation is mainly due to the syntactic nature of modern information retrieval models, which perform query-document matching based on syntactic or simplified semantic similarity measures. Information retrieval models consist of the following components: documents, information needs, document representations in the form of indices, information-need representations in the form of queries, and a relevance matching component, which matches queries against indices. In this article, we focus on the problem of creating document representations that capture the time to which indexed documents relate. This in turn allows queries containing temporal criteria to be issued. Title: MULTI-AGENT AND EMBEDDED SYSTEM TECHNOLOGIES FOR AUTOMATIC SURVEILLANCE Author(s): M. C. Romero, F. Sivianes, A. Carrasco, M. D. Hernández and J. I. Escudero Abstract: Supervisory Control and Data Acquisition (SCADA) systems have traditionally used text-based Human Machine Interfaces (HMI). We propose a system which integrates multimedia information into SCADA systems in order to improve and support telecontrol tasks. This system has been deployed in a real environment and we have obtained satisfactory results. We then propose an improvement to this system, which allows telecontrol operators to use it without needing any experience with computers, and also enables automatic surveillance of the elements in the utility environment. The development of this improved system is accomplished by using the main advantages of embedded and multi-agent system technologies. Title: THE SWARM EFFECT MINIMIZATION ALGORITHM - UTILIZED TO OPTIMISE THE FREQUENCY ASSIGNMENT PROBLEM Author(s): Grant Blaise O’Reilly and Elizabeth Ehlers Abstract: The swarm effect minimization algorithm (SEMA) is presented in this paper. The SEMA was used to produce improved solutions for the minimum interference frequency assignment problem (MI-FAP) in mobile telecommunications networks.
The SEMA has a multi-agent oriented design based on the stigmergy concept, which allows the actual changes made in the environment by entities in a swarm to act as a source of information that aids the swarm entities when making further changes in the environment. The entities do not blindly control the changes in the environment; rather, the actual changes guide the entities. The SEMA is tested against the COST 259 Siemens benchmarks as well as in a commercial mobile telecommunications network, and the results are presented in this paper. Title: ALGORITHMS FOR AI LOGIC OF DECISIONS IN MULTI-AGENT ENVIRONMENT Author(s): Vladimir Rybakov and Sergey Babenyshev Abstract: The paper considers a temporal multi-agent logic LD_{m,a} (with interacting agents), modelling decision taking through access to knowledge via agents’ interaction. The interaction is modelled by possible communication channels between agents in temporal Kripke/Hintikka-style models. The logic LD_{m,a} distinguishes local and global decisions and is based on temporal Kripke/Hintikka models with agent accessibility relations distributed over the states of all possible time clusters. The main result provides a decision algorithm for LD_{m,a} (so, we prove that LD_{m,a} is decidable), which also solves the satisfiability problem. Title: BUILDING A DECISION SUPPORT SYSTEM FOR STUDENTS BY USING CONCEPT MAPS Author(s): Dumitru Dan Burdescu, Marian Cristian Mihaescu and Bogdan Logofatu Abstract: Concept maps are an effective way of representing organized knowledge (concepts) in a hierarchical fashion that reflects a person’s understanding of a domain of knowledge. Within our custom-developed e-Learning platform, we created a concept map for a chapter of a discipline. The obtained concept map has been used to create test and exam questions such that knowledge regarding each concept is tested by a certain number of quizzes.
We present an architecture of a decision support system that assesses the accumulated knowledge of students. The architecture’s business logic is based on a concept map of a chapter of a discipline. A custom algorithm has been designed and implemented to measure the coverage of the curriculum. The system may be generalized to an entire discipline, provided that a concept map and all other necessary settings are set up for each chapter. Title: RECOGNITION OF VEHICLE NUMBER PLATES Author(s): Ondrej Martinsky Abstract: This work deals with problems from the fields of artificial intelligence, machine vision and neural networks in the construction of an automatic number plate recognition (ANPR) system. These problems include the mathematical principles and algorithms that ensure the detection of the number plate and the proper segmentation, normalization and recognition of its characters. The work comparatively examines methods for achieving system invariance to image skew, translation and varying light conditions during capture. It also contains an implementation of a demonstration model able to perform these functions on a set of snapshots. Title: ENTERPRISE INFORMATION RETRIEVAL: A SURVEY Author(s): Hamid Turab Mirza Abstract: Efficient retrieval of relevant information is a critical success factor for many enterprises. Despite all the advancements in web search technology, enterprise search still faces many challenges and problems. The boundaries of enterprise search are broad and users’ expectations are quite high; among the many challenges faced, one of the major problems is the difference between the nature of web and enterprise searching.
Many solutions have been proposed and techniques devised to improve enterprise search, but effective enterprise searching remains a challenge for researchers and commercial companies, although it is recognized that a solution would deliver enormous benefits. Title: A DECISION SUPPORT SYSTEM FOR FACILITY LOCATION SELECTION BASED ON A FUZZY HOUSE OF QUALITY METHOD Author(s): R. Tavakkoli-Moghaddam and S. Hassanzadeh-Amin Abstract: Companies investigate decision support systems (DSSs) for facility location selection to reduce cost and manage risk. In this paper, a decision support system for location selection is proposed based on a house of quality (HOQ) method, adapting the analysis with fuzzy logic and triangular fuzzy numbers. Special attention is also paid to subjective assessment in the HOQ concept. Further, the differences between decision makers are taken into account. Finally, a case study is presented to demonstrate the procedure of the proposed algorithm and identify the suitable location. Title: COMBINING INDEXING METHODS AND QUERY SIZES IN INFORMATION RETRIEVAL IN FRENCH Author(s): Désiré Kompaoré, Josiane Mothe and Ludovic Tanguy Abstract: This paper analyses three different types of indexing methods applied to French test collections (CLEF from 2000 to 2005): lemmas, truncated terms and single words. The same search engine and the same characteristics are used independently of the indexing method to avoid variability in the analysis. When evaluated on the French CLEF collections, indexing by lemmas is the best method compared to the single word and truncated term methods. We also analyse the impact of combining indexing methods by using the CombMNZ function. As CLEF topics are composed of different parts, we also examine the influence of these topic parts by comparing the results when topic parts are considered individually and when they are combined. Finally, we combine both indexing methods and query parts.
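The CombMNZ function mentioned in this abstract is a standard data-fusion rule: a document's fused score is the sum of its scores across the individual runs, multiplied by the number of runs that retrieved it. A minimal sketch (the run dictionaries and scores below are illustrative, not taken from the paper):

```python
def comb_mnz(runs):
    """CombMNZ fusion: summed score times the number of
    runs (here, indexing methods) that retrieved the document."""
    fused = {}
    for scores in runs:
        for doc, s in scores.items():
            total, hits = fused.get(doc, (0.0, 0))
            fused[doc] = (total + s, hits + 1)
    return {doc: total * hits for doc, (total, hits) in fused.items()}

# Hypothetical runs from two indexing methods (lemmas vs. single words)
lemma_run = {"d1": 0.9, "d2": 0.4}
word_run = {"d1": 0.7, "d3": 0.5}
fused = comb_mnz([lemma_run, word_run])
# d1 appears in both runs, so its summed score is doubled
```

Documents found by several methods are thus boosted over documents found by only one, which is why fusing complementary indexing methods can raise MAP.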
We show that MAP can be improved by up to 8% compared to the best individual methods. Title: ANOMALY DETECTION ALGORITHMS IN BUSINESS PROCESS LOGS Author(s): Fábio Bezerra and Jacques Wainer Abstract: In some domains of application, like software development and health care processes, a normative business process system (e.g. a workflow management system) is not appropriate because flexible support is needed for the participants. On the other hand, while it is important to support flexibility of execution in these domains, security requirements cannot be met if these systems do not offer extra control, which characterizes a trade-off between flexibility and security in such domains. This work presents and assesses a set of anomaly detection algorithms for logs of Process Aware Systems (PAS). The detection of an anomalous instance is based on the “noise” which an instance introduces into a process model discovered by a process mining algorithm. As a result, a trace that is an anomaly for a discovered model will require more structural changes for the model to fit it than a trace that is not an anomaly. Hence, when integrated with a PAS, these methods can support the coexistence of security and flexibility. Title: A NEW LEARNING ALGORITHM FOR CLASSIFICATION IN THE REDUCED SPACE Author(s): Luminita State, Catalina Cocianu, Ion Rosca and Panayiotis Vlamos Abstract: The aim of the research reported in the paper was twofold: to propose a new approach in cluster analysis and to investigate its performance when it is combined with dimensionality reduction schemes. Our attempt is based on group skeletons defined by a set of orthonormal eigenvectors (principal directions) of the sample covariance matrix. Our developments impose a set of quite natural working assumptions on the true but unknown nature of the class system.
The search process looks for the optimal clusters approximating the unknown classes, aiming at homogeneous groups, where homogeneity is defined in terms of the “typicality” of components with respect to the current skeleton. Our method is described in the third section of the paper. The compression scheme is set in terms of the principal directions corresponding to the available cloud. The final section presents the results of tests comparing the performance of our method with the standard k-means clustering technique when they are applied to the initial space as well as to the compressed data. Title: SUBJECTIVE PREFERENCES IN FINANCIAL PRODUCTS Author(s): Emili Vizuete Luciano and Anna Mª Gil Lafuente Abstract: When a businessman makes financial investments in banking organizations, he is faced with the need to choose between apparently different products which, when all is said and done, are very similar. Financial advisers have to offer an agile and well-qualified service in order to keep the confidence of their clients and consequently increase their results. The new situation we face cannot be treated by means of conventional models, since we are dealing with total uncertainty. Title: INTERNAL FRAUD RISK REDUCTION - RESULTS OF A DATA MINING CASE STUDY Author(s): Mieke Jans, Nadine Lybaert and Koen Vanhoof Abstract: Corporate fraud these days represents a huge cost to our economy. The academic literature has already concentrated on how data mining techniques can be of value in the fight against fraud. All this research focuses on fraud detection, mostly in a context of external fraud. In this paper we discuss the use of a data mining approach to reduce the risk of internal fraud. Reducing fraud risk encompasses both detection and prevention; we therefore apply a descriptive data mining technique, as opposed to the predictive data mining techniques widely used in the literature.
The results of applying a latent class clustering algorithm to a case company’s procurement data suggest that this approach of descriptive data mining is useful in assessing the current risk of internal fraud. Title: AN OBJECT SELECTION MECHANISM FOR SCHEMA INTEGRATION OF AGENT’S KNOWLEDGE STRUCTURE IN VIRTUAL REALITY Author(s): Dong-Hoon Kim and Jong-Hee Park Abstract: Like human knowledge, the knowledge of agents should be able to express varied and vast information in virtual reality. Representing this information requires the construction of a large number of schemas, which are therefore represented redundantly; this brings about problems such as update and insertion anomalies. To solve this problem, a method of schema integration should be considered. In this paper, we propose methods of selecting objects that are suitable for schema integration. Title: FORECASTING WITH ARTMAP-IC NEURAL NETWORKS - AN APPLICATION USING CORPORATE BANKRUPTCY DATA Author(s): Anatoli Nachev Abstract: Financial diagnosis and prediction of corporate bankruptcy can be viewed as a pattern recognition problem. This paper proposes a novel solution based on ARTMAP-IC, a general-purpose neural network system for supervised learning and recognition. For a popular dataset, with proper preprocessing steps, the model outperforms similar techniques and provides prediction accuracy equal to the best obtained by backpropagation MLPs. Advantages of the proposed model over the MLPs are its short online learning, fast adaptation to novel patterns and scalability. Title: A STOCHASTIC APPROACH FOR PERFORMANCE ANALYSIS OF PRODUCTION FLOWS Author(s): Philippe Bouché and Cecilia Zanni Abstract: In our increasingly competitive world, companies today are implementing improvement strategies in every department and, in particular, in their manufacturing systems.
This paper discusses the use of a global method based on a knowledge-based approach for the development of a software tool for modelling and analysis of production flows. This method will help promote the companies’ competitiveness by guaranteeing the efficiency of their production lines and, therefore, the quality and traceability of the manufactured products. Different kinds of techniques will be used: graphic representation of the production, identification of specific behaviours, and the search for correlations among events on the production line. Most of these techniques are based on statistical and probabilistic analyses. To carry out high-level analyses, a stochastic approach will be used to identify specific behaviours with the aim of defining action plans. Title: DISTRIBUTED ENSEMBLE LEARNING IN TEXT CLASSIFICATION Author(s): Catarina Silva, Bernardete Ribeiro, Uroš Lotrič and Andrej Dobnikar Abstract: In today's society, individuals and organizations are faced with an ever growing load and diversity of textual information and content, and with increasing demands for knowledge and skills. Coping with these demands requires cutting-edge learning techniques. In this work we try to answer part of these challenges by addressing text classification problems, essential to managing knowledge, by combining several different pioneering kernel-learning machines, namely Support Vector Machines and Relevance Vector Machines. To speed up these complex learning procedures we establish a high-performance distributed computing environment to help tackle the tasks involved in the text classification problem. The presented approach is valuable in many practical situations where text classification is used in search engines, either in the form of pre-classification, e.g., engines providing topic directories manually created by human experts, or post-classification, e.g., engines providing automated classification of query results.
While the former increases precision, the latter enhances the presentation of results. The benchmark data sets REUTERS-21578 and RCV1 are used to demonstrate the strength of the proposed system, while different ensemble-based learning machines provide text classification models that are efficiently deployed on the Condor and Alchemi platforms. Title: A GLOBAL MODEL OF SEQUENCES OF DISCRETE EVENT CLASS OCCURRENCES Author(s): Philippe Bouché, Marc Le Goc and Jérome Coinu Abstract: This paper proposes a global model of a sequence of alarms that are generated by a knowledge-based system monitoring a dynamic process. The modelling approach is based on the Stochastic Approach to discover timed relations between discrete event classes from the representation of a set of sequences under the dual form of a homogeneous continuous-time Markov chain and a superposition of Poisson processes. Abductive reasoning on these representations allows the discovery of chronicle models that can be used as diagnosis rules. Such rules subsume a temporal model, called the average time sequence, that sums up the initial set of sequences. This paper presents this model and the role it plays in the analysis of an industrial application monitored with a network of industrial automata. Title: ASSESSMENT OF THE EFFECT OF NOISE ON AN UNSUPERVISED FEATURE SELECTION METHOD FOR GENERATIVE TOPOGRAPHIC MAPPING Author(s): Alfredo Vellido and Jorge S. Velazco Abstract: Unsupervised feature relevance determination and feature selection for dimensionality reduction are important issues in many clustering problems. An unsupervised feature selection method for general Finite Mixture Models was recently proposed and subsequently extended to Generative Topographic Mapping (GTM), a nonlinear manifold learning constrained mixture model that provides data clustering and visualization.
Some of the results of a previous preliminary assessment of this method for GTM suggested that its performance may be affected by the presence of uninformative noise in the dataset. In this brief study, we test this limitation of the method in some detail. Title: LOCAL SEARCH AS A FIXED POINT OF FUNCTIONS Author(s): Eric Monfroy, Frédéric Saubion, Broderick Crawford and Carlos Castro Abstract: Constraint Satisfaction Problems (CSP) provide a general framework for modeling many practical applications (planning, scheduling, timetabling, ...). CSPs can be solved with complete methods (e.g., constraint propagation) or incomplete methods (e.g., local search). Although there are some frameworks to formalize constraint propagation, there are only a few studies of theoretical frameworks for local search. We are here concerned with the design of a generic framework to model local search as the computation of a fixed point of functions. This work allows one to simulate standard strategies used for local search, and to easily design new strategies in a uniform framework. Title: APPLYING MULTI-AGENT SYSTEMS TO ORGANIZATIONAL MODELLING IN INDUSTRIAL ENVIRONMENTS Author(s): M. C. Romero, R. M. Crowder, Y. W. Sim and T. R. Payne Abstract: This paper considers an agent-based approach to organizational modelling within the engineering design domain. The interactions between individual designers within a design team have a significant impact upon how well a task can be performed, and hence upon the quality of the resultant product; many organisations therefore wish to model, and thereby fully understand, the process. Using multi-agent social modelling, designers and the design task attributes can be the subject of rules implying how well tasks can be performed given different levels of these attributes. In this paper we discuss the background to the work and the identification of individual and team variables.
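The core idea of the "Local search as a fixed point of functions" abstract above can be made concrete: a local optimum is exactly a state that the move function maps to itself, so local search is iteration of that function until a fixed point is reached. A minimal sketch, not the paper's formal framework (the cost function and neighbourhood below are illustrative):

```python
def best_neighbor(x, cost, neighbors):
    """One local-search step: move to the best improving neighbour, or stay put."""
    best = min(neighbors(x), key=cost, default=x)
    return best if cost(best) < cost(x) else x

def fixed_point(step, x):
    """Iterate the step function until it no longer changes the state;
    the returned state is a fixed point, i.e. a local optimum."""
    nxt = step(x)
    while nxt != x:
        x, nxt = nxt, step(nxt)
    return x

# Illustrative 1-D problem: minimise (x - 3)^2 over the integers
cost = lambda x: (x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
opt = fixed_point(lambda x: best_neighbor(x, cost, neighbors), 10)
# → 3
```

Different local-search strategies (tabu moves, random restarts, ...) then correspond to different choices of the step function, while the fixed-point iteration itself stays unchanged.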
Title: A KNOWLEDGE-BASED PERFORMANCE MEASUREMENT SYSTEM FOR IMPROVING RESOURCE UTILIZATION Author(s): Annie C. Y. Lam, S. K. Kwok and W. B. Lee Abstract: In the current manufacturing industry, there are various challenges, including short product life cycles, process automation and global competition. It is critical for manufacturing companies to ensure effective utilization of production assets for overall business success. In order to focus scarce resources on areas that have the greatest impact on productivity, performance evaluation of resource allocation is necessary to assist companies in improving resource utilization and accomplishing their objectives. In this paper, a knowledge-based performance measurement system (KPMS) is designed to evaluate resource allocation decisions and provide recommendations to improve performance and physical asset utilization. The framework of the proposed system, which is constructed from rule-based reasoning, case-based reasoning and a mathematical model, is introduced. By integrating the mathematical model with knowledge rules, performance indicators that are associated with the achievement of company objectives can be determined to quantify the performance of the resource allocation. Moreover, a case-based reasoning technique is adopted to evaluate the performance and reuse the experience of past cases to provide recommendations for improvement. Title: THE PROTÉGÉ - PROMETHEUS APPROACH TO SUPPORT MULTI-AGENT SYSTEMS CREATION Author(s): Marina V. Sokolova and Antonio Fernández-Caballero Abstract: This paper proposes the integration of two existing and widely accepted tools, the Protégé Ontology Editor and Knowledge-Base Framework and the Prometheus Development Kit, into a common approach that aims to cover the principal stages of the MAS development life cycle and offers a general sequence of steps facilitating application creation.
The approach is successfully being applied to situation assessment issues, and has resulted in an agent-based decision-support system for environmental impact evaluation. Title: A MODEL TO RATE TRUST IN COMMUNITIES OF PRACTICE Author(s): Javier Portillo-Rodriguez, Juan Pablo Soto, Aurora Vizcaino and Mario Piattini Abstract: Communities of Practice are an important centre of knowledge exchange in which feelings such as membership or trust play a significant role, since both are the basis for a suitable sharing of knowledge. However, current Communities of Practice are often “virtual”, as their members may be geographically distributed. This makes it more difficult for a feeling of trust to develop. In this paper we describe a trust model designed to help software agents, which represent community of practice members, to rate how trustworthy a knowledge source is. It is important to clarify that we also consider members as knowledge sources since, in fact, they are the most important knowledge providers. Title: K-SITE RULES - INTEGRATING BUSINESS RULES IN THE MAINSTREAM SOFTWARE ENGINEERING PRACTICE Author(s): José L. Martínez-Fernández, José C. González and Pablo Suárez Abstract: The technology for business rule based systems faces two important challenges: standardization and integration within conventional software development lifecycle models and tools. Despite the standardization effort carried out by international organizations, commercial tools incorporate their own flavours of rule languages, making migration among tools difficult. On the other hand, although some business rules systems vendors incorporate interfaces to encapsulate decision models as web services, it is still difficult to integrate business rules into traditional object-oriented analysis and design methodologies. This is the rationale behind the development of K-Site Rules, a tool that facilitates the cooperation of business people and software designers in business applications.
Title: IMPROVING CASE RETRIEVAL PERFORMANCE THROUGH THE USE OF CLUSTERING TECHNIQUES Author(s): Paulo Tomé, Ernesto Costa and Luís Amaral Abstract: The performance of Case-Based Reasoning (CBR) systems is highly dependent on the performance of the retrieval phase. Usually, if the case memory has a large number of cases, the system turns out to be very slow. Several mechanisms have been proposed in order to prevent a full search of the case memory during the retrieval phase. In this work we propose a clustering technique applied to the memory of cases; this strategy, however, is applied to an intermediate level of information that defines paths to the cases. Algorithms for the retrieval and retention phases are also presented. Title: COMPARING PEOPLE IN THE ENTERPRISE Author(s): Gianluca Demartini Abstract: Enterprise Search Systems are requested to provide more and more functionalities for supporting decisions at the management level. An important aspect to consider is the available human power and knowledge. For this reason, in this paper we present an improved approach to comparing experts in order to retrieve and present to the user the most appropriate candidates for a given project. We also propose an evaluation framework which will enable a fair comparison among Enterprise Search Systems (ESSs), facilitating the choice among several available products. Title: RULE EVOLUTION APPROACH FOR MINING MULTIVARIATE TIME SERIES DATA Author(s): Viet-An Nguyen and Vivekanand Gopalkrishnan Abstract: In the last two decades, time series data mining has been an attractive topic of numerous research efforts, most of which focus only on time series of a single attribute. However, in many real-world problems, the data come in as time series of multiple attributes among which there are correlations. In this paper, we present a novel approach to model and predict the behaviour of multi-attribute time series data based on the idea of rule evolution.
Our approach is divided into separate steps, each of which can be accomplished by several machine learning techniques. This makes our system highly flexible, configurable and extendable. Experiments are performed on a real-world database, and the results demonstrate that our system has substantial estimation reliability and prediction accuracy. Title: A JOINT OPTIMIZATION ALGORITHM FOR DISPATCHING TASKS IN AGENT-BASED WORKFLOW MANAGEMENT SYSTEMS Author(s): Pavlos Delias, Anastasios Doulamis and Nikolaos Matsatsinis Abstract: Workflow problems generally require the coordination of many workers, machines and computers. Agents provide a natural mechanism for modelling a system where multiple actors operate, but they do not explicitly support coordination schemes. Efficient task allocation to these actors is a fundamental coordination prerequisite. A competent allocation policy should address both system performance issues and users’ quality demands. Since these factors are often contradictory, an efficient solution is hard to identify. In this study, we suggest a task delegation strategy that jointly optimizes system performance (as expressed by workload balancing) and quality demands (as expressed by minimum task overlapping). A consistent modelling approach allows us to transform data on both these factors into a matrix format. The next step is to exploit the Ky Fan theorem and the notion of generalized eigenvalues to optimally solve the task allocation problem. A simple scheduling policy and an experimental setup were applied to test the efficiency of the proposed algorithm. Title: SIFT APPROACH FOR BALL RECOGNITION IN SOCCER IMAGES Author(s): M. Leo, T. D’Orazio, N. Mosca and A. Distante Abstract: In this paper a new method for ball recognition in soccer images is proposed.
It detects the ball position in each frame but, differently from related previous approaches, it does not require a long and tedious phase to build different positive training sets in order to properly manage the great variance in ball appearance. Moreover, it does not need any negative training set, avoiding the difficulties of building one that occur when, as in the soccer context, negative examples abound. A large number of experiments have been carried out on image sequences acquired during real matches of the Italian Soccer “Serie A” championship. The reported experiments demonstrate the satisfactory capability of the proposed approach to recognize the ball. Title: ARTIFICIAL INTELLIGENCE FOR WOUND IMAGE UNDERSTANDING Author(s): Augustin Prodan, Mădălina Rusu, Remus Câmpean and Rodica Prodan Abstract: This paper presents an Artificial Intelligence framework for analyzing, processing and understanding wound images, to be used in teaching, learning and research activities. Based on new paradigms of Artificial Intelligence, we intend to promote e-learning technologies in the medical, pharmaceutical and health care domains. We use Java and XML technologies to build models for various categories of wounds due to various aetiologies. Colour and texture information provide the infrastructure for a structured approach to non-invasive wound assessment. Relying on this information, we identify the main barriers to wound healing, such as non-viable tissue, infection, inflammation, moisture imbalance, or non-advancing edges. This framework provides the infrastructure for preparing e-learning scenarios based on practice and real-world experiences. We perform experiments on wound healing simulation using various treatments and compare the results with experimental observations. Our experiments are supported by XML-based databases containing knowledge extracted from previous wound healing experiences and from medical experts’ knowledge.
Our work is based on a continuous collaboration with physicians and wound care experts, because it is necessary to make a rigorous classification of the various categories of wounds. To implement the e-learning tools, we use Java technologies for dynamic processes and XML technologies for dynamic content. Title: K-NN: ESTIMATING AN ADEQUATE VALUE FOR PARAMETER K Author(s): Bruno Borsato, Alexandre Plastino and Luiz Merschmann Abstract: The k-NN (k Nearest Neighbours) classification technique is characterized by its simplicity and efficient performance on many databases. However, the good performance of this method relies on the choice of an appropriate value for the input parameter k. In this work, we propose methods to estimate an adequate value for parameter k for any given database. Experimental results have shown that, in terms of predictive accuracy, k-NN using the estimated value for k usually outperforms k-NN with the values commonly used for k, as well as well-known methods such as decision trees and naive Bayes classification. Title: AWARENESS IN PROJECT INFORMATION SPACES FOR IMPROVED COMMUNICATION AND COLLABORATION Author(s): Stefan Boddy, Matthew Wetherill, Yacine Rezgui and Grahame Cooper Abstract: The paper argues that facilitating timely and contextually grounded communication could help improve both coordination and decision making / problem solving in the construction project process. The paper discusses the authors’ previous work in construction IT, as well as related literature, and the findings that have led to the development of a framework for awareness of activity in project information spaces. A detailed description of the conceptual model and software architecture of the proposed resource awareness framework is given, followed by directions for future research. Title: THE LINGUISTIC GENERALIZED OWA OPERATOR AND ITS APPLICATION IN STRATEGIC DECISION MAKING Author(s): José M. Merigó and Anna M.
Gil-Lafuente Abstract: We introduce the linguistic generalized ordered weighted averaging (LGOWA) operator. It is a new aggregation operator that uses linguistic information and generalized means in the OWA operator. It is very useful for uncertain situations where the available information cannot be assessed with numerical values but can be expressed with linguistic assessments. This aggregation operator generalizes a wide range of aggregation operators that use linguistic information, such as the linguistic generalized mean (LGM), the linguistic weighted generalized mean (LWGM), the linguistic OWA (LOWA) operator, the linguistic ordered weighted geometric (LOWG) operator and the linguistic ordered weighted quadratic averaging (LOWQA) operator. We also introduce a new type of Quasi-LOWA operator by using quasi-arithmetic means in the LOWA operator. Finally, we develop an application of the new approach: we analyze a decision making problem concerning the selection of strategies. Title: THE GENERALIZED HYBRID AVERAGING OPERATOR AND ITS APPLICATION IN FINANCIAL DECISION MAKING Author(s): José M. Merigó and Montserrat Casanovas Abstract: We present the generalized hybrid averaging (GHA) operator. It is a new aggregation operator that generalizes the hybrid averaging (HA) operator by using the generalized mean. We are thus able to generalize a wide range of mean operators, such as the HA, the hybrid geometric averaging (HGA) and the hybrid quadratic averaging (HQA). The HA is an aggregation operator that includes the ordered weighted averaging (OWA) operator and the weighted average (WA). With the GHA, we are therefore able to obtain all the particular cases produced by using generalized means in the OWA and in the WA, such as the weighted geometric mean, the ordered weighted geometric (OWG) operator and the weighted quadratic mean (WQM). We further generalize the GHA by using quasi-arithmetic means, obtaining the quasi-arithmetic hybrid averaging (Quasi-HA) operator.
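In the purely numerical case, the generalized OWA family underlying both of these operator abstracts reduces to a weighted generalized mean applied to the reordered arguments: GOWA(a_1, ..., a_n) = (sum_j w_j * b_j^lambda)^(1/lambda), where b_j is the j-th largest argument. A small numeric sketch (the weights and scores are illustrative; the linguistic variants in the papers operate on linguistic labels rather than numbers):

```python
def gowa(values, weights, lam):
    """Generalized OWA: reorder the arguments decreasingly, then take the
    weighted generalized mean (sum of w_j * b_j**lam) ** (1/lam)."""
    b = sorted(values, reverse=True)
    return sum(w * x ** lam for w, x in zip(weights, b)) ** (1.0 / lam)

# Illustrative weights and scores for four criteria
w = [0.4, 0.3, 0.2, 0.1]
scores = [60, 80, 50, 70]
owa = gowa(scores, w, 1)   # lam = 1: the ordinary OWA operator
quad = gowa(scores, w, 2)  # lam = 2: ordered weighted quadratic averaging
```

Varying lambda recovers the particular cases listed in the abstracts (lambda = 1 gives the OWA, lambda = 2 the quadratic variant, and lambda approaching 0 the geometric one), which is exactly the sense in which the GOWA and GHA generalize those operators.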
Finally, we apply the new approach to a financial decision making problem. Title: ORGANIZATIONAL MODELING AND ANALYSIS OF SAFETY OCCURRENCE REPORTING IN AIR TRAFFIC Author(s): Alexei Sharpanskykh, Sybert H. Stroeve and Henk A. P. Blom Abstract: An Air Traffic Organization (ATO) is a complex organization that involves many parties with diverse goals performing a wide range of tasks. Due to this high complexity, inconsistencies and performance bottlenecks may occur in ATOs. Through analysis, such safety- and performance-related problems of an ATO can be identified. To perform reliable and profound analysis, automated techniques are required. A formal model specification that comprises both the prescriptive aspects of a formal organization and the autonomous behavioural aspects of agents forms the basis for such techniques. This paper describes how such a model specification is developed and analyzed in the framework of a simulation case of incident reporting in an ATO. Title: GENETIC FEATURE SELECTION AND STATISTICAL CLASSIFICATION OF VOIDS IN CONCRETE STRUCTURE Author(s): G. Acciani, G. Fornarelli, D. Magarielli and D. Maiullari Abstract: In this work, simulated ultrasonic waveforms in a concrete specimen, obtained by software based on the finite element method, were used to develop an automatic inspection method. A piezoelectric transducer is used to generate stress waves that are reflected by voids. The waves are then received by another transducer set at a fixed distance from the first one on the same specimen surface. Time and frequency features have been extracted from the waveforms, the most significant features have been chosen by genetic feature selection, and the classification performance was estimated with a k-NN classifier. Title: DYNAMIC SEARCH-BASED TEST DATA GENERATION FOCUSED ON DATA FLOW PATHS Author(s): Anastasis A. Sofokleous and Andreas S.
Andreou Abstract: Test data generation approaches produce sequences of input values until they determine a set of test cases that can adequately test the program under test. This paper focuses on a search-based test data generation algorithm. It proposes a dynamic software testing framework which employs a specially designed genetic algorithm and utilises both control flow and data flow graphs, the former as a code coverage tool and the latter for extracting data flow paths, to determine a near-optimum set of test cases according to data flow criteria. Experimental results on a pool of standard benchmark programs demonstrate the high performance and efficiency of the proposed approach, which significantly outperforms related search-based test data generation methods. Title: REAL TIME CLUSTERING MODEL Author(s): J. Cheng, M. R. Sayeh and M. R. Zargham Abstract: This paper focuses on the development of a dynamic system model in an unsupervised learning environment. This adaptive dynamic system consists of a set of energy functions which create valleys for representing clusters. Each valley represents a cluster of similar input patterns. The system includes a dynamic parameter for the clustering vigilance so that the cluster size, or the quantizing resolution, can adapt to the density of the input patterns. It also includes a factor for invoking competitive exclusion among the valleys, forcing only one label to be assigned to each cluster. Through several examples of different pattern clusters, it is shown that the model can successfully cluster these types of input patterns and form clusters of different sizes according to the size of the input patterns. Title: AN INTELLIGENT DECISION SUPPORT SYSTEM FOR SUPPLIER SELECTION Author(s): R. J. Kuo, L. Y.
Lee and Tung-Lai Hu Abstract: This study develops an intelligent decision support system which integrates the fuzzy analytical hierarchy process (AHP) method and fuzzy data envelopment analysis (DEA) to assist organizations in making supplier selection decisions. A case study on an internationally well-known auto lighting OEM company shows that the proposed method is well suited to practical applications. Title: A MEMETIC-GRASP ALGORITHM FOR CLUSTERING Author(s): Yannis Marinakis, Magdalene Marinaki, Nikolaos Matsatsinis and Constantin Zopounidis Abstract: This paper presents a new memetic algorithm, based on the concepts of Genetic Algorithms (GAs), Particle Swarm Optimization (PSO) and the Greedy Randomized Adaptive Search Procedure (GRASP), for optimally clustering N objects into K clusters. The proposed algorithm is a two-phase algorithm which combines a memetic algorithm for the solution of the feature selection problem and a GRASP algorithm for the solution of the clustering problem. In contrast to classic genetic algorithms, the evolution of each individual of the population is realized with a PSO algorithm, where each individual has to improve its physical movement following the basic principles of PSO until it meets the requirements to be selected as a parent. Its performance is compared with other popular metaheuristic methods such as classic genetic algorithms, tabu search, GRASP, ant colony optimization and particle swarm optimization. To assess the efficacy of the proposed algorithm, the methodology is evaluated on datasets from the UCI Machine Learning Repository.
The proposed algorithm achieves high performance: it gives very good results and, in some instances, the percentage of correctly clustered samples exceeds 96%. Title: NURSE SCHEDULING BY COOPERATIVE GA WITH VARIABLE MUTATION OPERATOR Author(s): Shin-ya Uneme, Hikaru Kawano and Makoto Ohki Abstract: This paper proposes an effective mutation operator for the cooperative genetic algorithm (CGA) to solve a nurse scheduling problem. Nurse scheduling is a very complex task for a clinical director in a general hospital; even a veteran director requires one or two weeks to create the schedule. In addition, we extend the nurse schedule to permit changes to the schedule, a permission that explosively increases the computation time of the nurse scheduling. We propose the new mutation operator to solve this problem. The CGA with the new mutation operator has produced surprisingly effective results. Title: ANALYSING MULTIDIMENSIONAL DATABASES USING DATA MINING AND BUSINESS INTELLIGENCE TO PROVIDE DECISION SUPPORT Author(s): Rajveer Singh Basra and Kevin J. Lu Abstract: After relational databases and data warehouses, the techniques used for data management have entered the next phase: Business Intelligence (BI) tools. These tools provide enhanced business functionality by integrating data mining and advanced analytics into data warehouse systems to provide comprehensive support for data management, analysis and decision support. In this paper, we introduce an on-going project aimed at developing BI tools on data warehouse systems for multi-dimensional analysis. A prototype has been developed and tested on two examples, which are also reported in this paper.
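Several of the clustering abstracts above quote the percentage of correctly clustered samples as their quality figure. As an illustration only, and not code from any of the papers, such a figure is commonly computed as cluster purity against ground-truth labels; the sketch below assumes each sample has a known true class:

```python
from collections import Counter, defaultdict

def cluster_purity(assignments, true_labels):
    """Purity: each cluster votes its majority ground-truth class;
    purity is the fraction of samples matching their cluster's majority."""
    clusters = defaultdict(list)
    for cluster_id, label in zip(assignments, true_labels):
        clusters[cluster_id].append(label)
    majority_total = sum(Counter(labels).most_common(1)[0][1]
                         for labels in clusters.values())
    return majority_total / len(true_labels)
```

For example, assignments `['a', 'a', 'b', 'b', 'b']` against true labels `[0, 0, 1, 1, 2]` give a purity of 0.8, since one of the five samples disagrees with its cluster's majority class.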
Title: DISCOVERING EXPERT’S KNOWLEDGE FROM SEQUENCES OF DISCRETE EVENT CLASS OCCURRENCES Author(s): Le Goc Marc and Benayadi Nabil Abstract: This paper is concerned with the discovery of expert's knowledge from a sequence of alarms provided by a knowledge-based system monitoring a dynamic process. The discovery process is based on the principles and tools of the Stochastic Approach. In this framework, a sequence is represented in the form of a Markov chain from which binary relations between discrete event classes can be found and represented as abstract chronicle models. The problem with this approach is to reduce the search space as closely as possible to the relations between the process variables. To this aim, we propose an adaptation of the J-Measure to the Stochastic Approach, the BJ-Measure, to build an entropy-based heuristic that helps in finding abstract chronicle models revealing strong relations between the process variables. The result of applying this approach to a real-world system, the Sachem system that controls the blast furnace of the Arcelor-Mittal Steel group, is provided in the paper, showing how the combination of the Stochastic Approach and Information Theory allows finding a priori expert's knowledge about blast furnace variables from a sequence of alarms. Title: ON CHECKING TEMPORAL-OBSERVATION SUBSUMPTION IN SIMILARITY-BASED DIAGNOSIS OF ACTIVE SYSTEMS Author(s): Gianfranco Lamperti, Federica Vivenzi and Marina Zanella Abstract: Similarity-based diagnosis of large active systems is supported by the reuse of knowledge generated for solving previous diagnostic problems. Such knowledge is cumulatively stored in a knowledge base when the diagnostic session is over. When a new diagnostic problem is to be faced, the knowledge base is queried in order to possibly find a similar, reusable problem.
Checking problem similarity requires, among other constraints, that the observation relevant to the new problem be subsumed by the observation relevant to the problem in the knowledge base. However, checking observation subsumption following its formal definition is time- and space-consuming. The bottleneck lies in the generation of a nondeterministic automaton, its subsequent transformation into a deterministic one (the index space of the observation), and a regular-language containment check. In order to speed up the diagnostic process, an alternative technique is proposed, based on the notion of coverage. Besides being effective, subsumption checking via coverage is also efficient because no index-space generation or comparison is required. Experimental evidence supports this claim. Title: A MULTI AGENT SYSTEM MODEL TO EVALUATE THE DYNAMICS OF A COLLABORATIVE NETWORK Author(s): Ilaria Baffo, Giuseppe Confessore, Graziano Galiano and Silvia Rismondo Abstract: The paper provides a model based on the Multi Agent System (MAS) paradigm that gives a methodological basis for evaluating the dynamics of a collaborative environment. The model dynamics are strictly driven by the concept of competence. In the provided MAS, the agents represent the actors operating in a given area. In particular, the agents are assumed to be of three distinct typologies: (i) the territorial agent, (ii) the enterprise agent, and (iii) the public agent. Each agent has its local information and goals, and interacts with the others by using an interaction protocol. The decision-making processes and the competences characterize each of the different agent typologies working in the system in a specific way.
Title: NEURAL NETWORKS APPLICATION TO FAULT DETECTION IN ELECTRICAL SUBSTATIONS Author(s): Luiz Biondi Neto, Pedro Henrique Gouvêa Coelho, Alexandre Mendonça Lopes, Marcelo Nestor da Silva and David Targueta Abstract: This paper proposes an application of neural networks to fault detection in electrical substations, particularly to the Parada Angélica Electrical Substation, part of the AMPLA Energy System provider in Rio de Janeiro, Brazil. For research purposes, that substation was modeled in a bay-oriented fashion instead of a component-oriented one. Moreover, the modeling process divided the substation into five sectors, or sets of bays, comprising components and protection equipment. These five sectors are: 11 feed bays, 2 capacitor bank bays, 2 general/secondary bays, 2 line bays and 2 backward bays. Electrical power engineering experts mapped 291 faults into 134 alarms. The employed neural networks, also bay-oriented, were trained using the Levenberg-Marquardt method, and the AMPLA experts validated the training patterns for each bay. The test patterns were obtained directly from the SCADA (Supervisory Control And Data Acquisition) digital system signals, suitably decoded and supplied by AMPLA engineers. The resulting maximum percentage error obtained by the fault detection neural networks was within 1.5%, which indicates the success of the neural networks on the fault detection problem. It should be stressed that the human experts remain solely responsible for the decision task and for returning the substation safely to normal operation after a fault occurrence; the role of the neural network fault detectors is to support the decision making done by the experts. Title: AN ONTOLOGY DRIVEN DATA MINING PROCESS Author(s): Laurent Brisson and Martine Collard Abstract: This paper deals with knowledge integration in a data mining process.
We suggest modelling domain knowledge during the business understanding and data understanding steps in order to build an ontology-driven information system (ODIS). This ODIS, dedicated to data mining tasks, allows expert knowledge to be used for efficient data selection, data preparation and model interpretation. Finally, we present a part-way interestingness measure, between objective and subjective measures, to evaluate model relevance according to expert knowledge. Title: MACHINE GROUPING IN CELLULAR MANUFACTURING SYSTEM USING TANDEM AUTOMATED GUIDED VEHICLE WITH ACO BASED SIX SIGMA APPROACH Author(s): Iraj Mahdavi, Babak Shirazi and Mohammad Mahdi Paydar Abstract: Effective design of material handling devices is one of the most important decisions in a cellular manufacturing system. Minimization of material handling operations can lead to optimization of overall operational costs. An automated guided vehicle (AGV) is a driverless vehicle used for the transportation of materials within a production plant partitioned into cells. The tandem layout divides the workstations into non-overlapping closed zones; in each zone, a tandem automated guided vehicle (TAGV) is allocated for internal transfers, and among adjacent loops some places are designated for exchanging semi-finished parts. This paper formulates a non-linear multi-objective problem for minimizing intra- and inter-loop material flow and minimizing the maximum amount of inter-cell flow, considering the work-loading limitation of the TAGVs. To reduce the variability of material flow and establish a balanced loop layout, new constraints have been added to the problem based on the six sigma approach. Due to the complexity of the problem, an ant colony optimization (ACO) algorithm is used for solving this model. Finally, this approach is compared with existing methods to demonstrate the advantages of the proposed model.
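Ant colony optimization appears in several abstracts in this area (machine grouping, hybrid clustering, timetabling). As a hedged illustration of the standard ACO ingredient these papers build on, and not any author's specific model, the random-proportional construction rule p_j ∝ τ_j^α · η_j^β (pheromone τ weighted against heuristic desirability η) can be sketched as:

```python
import random

def aco_probabilities(pheromone, heuristic, alpha=1.0, beta=2.0):
    """Random-proportional rule: p_j is proportional to tau_j^alpha * eta_j^beta."""
    weights = [(t ** alpha) * (h ** beta)
               for t, h in zip(pheromone, heuristic)]
    total = sum(weights)
    return [w / total for w in weights]

def choose_next(pheromone, heuristic, rng=random):
    """Roulette-wheel selection of the next component an ant adds to its solution."""
    probs = aco_probabilities(pheromone, heuristic)
    r = rng.random()
    cumulative = 0.0
    for j, p in enumerate(probs):
        cumulative += p
        if r <= cumulative:
            return j
    return len(probs) - 1  # guard against floating-point round-off
```

With uniform pheromone, the component with the best heuristic value is the most likely, but not the only possible, choice; pheromone updates then bias future constructions toward components of good solutions.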
Title: NEGOTIATION SUPPORTED THROUGH RISK ASSESSMENT Author(s): Sérgio Assis Rodrigues, Jano Moreira de Souza and Melise de Paula Abstract: In organizations, decision-making processes usually require great effort in solving conflicts. These disagreements are generally time-consuming and jeopardize negotiations. Thus, negotiation planning is the key factor for balancing negotiators’ expectations and reaching an agreement. In this scenario, risk management tools play an important role in identifying possibly controversial points of view. This paper presents software which supports negotiation through risk assessment. The proposed software, named RisNeg, provides mechanisms to identify, analyze, respond to, monitor, and control negotiation risks. RisNeg also provides Decision Trees to indicate the real chances of achieving agreement. Therefore, the use of such software may minimize conflicts in the decision-making process. Title: ANYTIME AHP METHOD FOR PREFERENCES ELICITATION IN STEREOTYPE-BASED RECOMMENDER SYSTEM Author(s): Lior Rokach, Amnon Meisels and Alon Schclar Abstract: In stereotype-based recommendation systems, user profiles are represented as an affinity vector of stereotypes. Upon the registration of new users, the system needs to assign the new users to existing stereotypes. The AHP (Analytic Hierarchy Process) methodology can be used for the initial elicitation of user preferences. However, using the AHP procedure as-is requires the user to respond to a very long set of pairwise comparison questions. We suggest a novel method for converting AHP into an anytime approach. At each stage, the user may choose not to continue, yet the system is still able to provide some classification into a stereotype. The more answers the user provides, the more specific the classification becomes.
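The AHP elicitation described above starts from a matrix of pairwise comparisons. As an illustration of textbook AHP, not of the paper's anytime variant, the priority vector can be approximated by the standard geometric-mean method: take the geometric mean of each row and normalise.

```python
import math

def ahp_priorities(pairwise):
    """Geometric-mean approximation of the AHP priority vector:
    row geometric means of the pairwise comparison matrix, normalised to sum to 1."""
    geo_means = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(geo_means)
    return [g / total for g in geo_means]
```

For a perfectly consistent matrix such as `[[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]]`, this recovers the underlying weights 4/7, 2/7 and 1/7; inconsistent judgements yield an approximation of the principal-eigenvector priorities.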
Title: MULTICRITERIA DECISION SUPPORT SYSTEM Author(s): Mariana Vassileva, Vassil Vassilev, Boris Staykov, Krassimira Genova and Danail Dochev Abstract: The paper presents a multicriteria decision support system called MultiOptima. It consists of two independent parts: the MKA-2 system and the MKO-2 system. The MultiOptima system is designed to support the decision maker (DM) in modelling and solving different problems of multicriteria analysis and linear and linear-integer problems of multicriteria optimization. The system implements four methods for multicriteria analysis, as well as an innovative generalized interactive method for multicriteria optimization with variable scalarization and parameterization, which can apply twelve scalarizing problems and is applicable to different ways of defining preferences by the decision maker. The class of problems solved, the system structure, the implemented methods and the graphical user interfaces of the MKA-2 and MKO-2 systems are discussed in the paper. The MultiOptima system can be used both for education and for solving real-life problems. Title: DETECTION OF INCOHERENCES IN A TECHNICAL AND NORMATIVE DOCUMENT CORPUS Author(s): Susana Martin-Toral, Gregorio I. Sainz-Palmero and Yannis Dimitriadis Abstract: This paper focuses on the problems and effects generated by the use of a document corpus containing mistakes, content incoherences amongst its connected documents and other errors, and on the ways in which these incoherences can be detected. The problem introduced in this paper is very relevant in any area of human activity where such a corpus is employed as a base element in the relationships amongst company partners, for legal support, etc. These problems can appear in several ways, and the effects produced differ, but a common situation exists in those areas of activity where many linked documents must be generated, managed and updated by different authors.
This paper describes some manifestations of this problem in the case of a technical document corpus used amongst partners, and the solution framework developed for this case. Several types of incoherences have been detected and formulated, connected with problems described in other research areas such as information extraction and retrieval, text mining and document interpretation, but all of them have been bounded and introduced from the point of view of document incoherences and their effects, especially in a company context. Finally, the computational architecture and methodology employed are described and the promising initial results are discussed. Title: ONTOLOGICAL APPROACH FOR THE CONFORMITY CHECKING MODELING IN CONSTRUCTION Author(s): Catherine Faron-Zucker, Nhan Le Thanh, Anastasiya Yurchyshyna and Alain Zarli Abstract: This paper presents an ontological method, and a corresponding system called C3R, to semi-automatically check the conformity of a construction project against a set of construction norms. Construction projects are represented by RDF graphs and norms are formalized as SPARQL queries. The conformity-checking process itself is based on the matching of these queries and graphs. The efficiency of the presented approach relies on two main keystones: the ontological representation of construction regulation knowledge, and the knowledge extraction process, which is based on the achieved ontological knowledge and is conducted according to the specific objective of conformity checking. The reasoning model also takes into account meta-knowledge acquired and formalized with the help of CSTB experts, which allows the checking process itself to be controlled and managed. The queries representing construction norms are annotated and organised according to these annotations. Finally, the annotations of construction norms help to explain the results of the validation process, especially in case of failure.
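The conformity-checking approach above matches SPARQL-style norm queries against RDF project graphs. The toy matcher below illustrates the underlying idea, binding pattern variables against a triple store, with plain Python tuples standing in for RDF triples; the entities, predicates and the width threshold are invented for illustration and are not from C3R or CSTB.

```python
def match(patterns, triples, binding=None):
    """Backtracking match of triple patterns (variables start with '?')
    against a set of (subject, predicate, object) triples.
    Returns one satisfying variable binding, or None."""
    binding = binding or {}
    if not patterns:
        return binding
    first, rest = patterns[0], patterns[1:]
    for triple in triples:
        candidate = dict(binding)
        for term, value in zip(first, triple):
            if isinstance(term, str) and term.startswith('?'):
                if candidate.setdefault(term, value) != value:
                    break  # variable already bound to a different value
            elif term != value:
                break  # constant term does not match this triple
        else:
            result = match(rest, triples, candidate)
            if result is not None:
                return result
    return None

def conforms(graph, pattern, check=lambda binding: True):
    """A project graph conforms to a norm if the pattern matches
    and the bound values pass an optional value-level check."""
    binding = match(pattern, graph)
    return binding is not None and check(binding)

# Hypothetical mini "project graph" and "norm" (names invented for illustration).
project = {("door1", "a", "Door"), ("door1", "hasWidth", 90)}
norm = [("?d", "a", "Door"), ("?d", "hasWidth", "?w")]
```

Here `conforms(project, norm, check=lambda b: b["?w"] >= 80)` returns `True`, mimicking a norm that requires every door to declare a sufficient width; a real system would delegate this to a SPARQL engine over actual RDF.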
Title: A NOVEL TERM WEIGHTING SCHEME FOR A FUZZY LOGIC BASED INTELLIGENT WEB AGENT Author(s): Ariel Gómez, Jorge Ropero, Carlos León and Alejandro Carrasco Abstract: Term Weighting is one of the most important tasks in Information Retrieval. To solve the Term Weighting problem, many authors have considered the Vector Space Model and, specifically, the TF-IDF method. As this method does not take into account some of the features of terms, we propose a novel, alternative fuzzy-logic-based method for Term Weighting in Information Retrieval. Term Weighting is an essential task for the Web Intelligent Agent we are developing. The Intelligent Agent's mode of operation is also explained in the paper. An application of the Intelligent Agent will soon be in operation for the University of Seville web portal. Title: EXPLORATIVE ASSOCIATION MINING - CROSS-SECTOR KNOWLEDGE FOR THE EUROPEAN HOME TEXTILE INDUSTRY Author(s): Jessica Huster, Michael Spenke and Gerrit Bury Abstract: Trend-related industries like the European home-textile industry have to keep pace with quickly changing product trends and consumer behaviour to avoid faulty production and the associated severe economic risk. The industry sector lacks cross-sector knowledge and knowledge about its end consumers. Click and ordering data reflect consumer behaviour as well as preferences and their changes. They are therefore, besides other indicators, an important trend indicator, which has not been used until now by the European home textile industry. In this paper, we report on the design and implementation of the Trend Analyser association mining component, which helps designers and product developers to better understand their end consumers. Our component uses explorative data mining to perform a market basket analysis and identify interesting associations.
Once found, such associations can help the decision makers of the home-textile industry to understand and study consumer behaviour and to identify changes in preferences early, in order to perform better production planning. Title: CONGESTION CONTROL SYSTEM WITH PID CONTROLLER USING FUZZY ADAPTATION MECHANISM Author(s): Magdalena Turowska Abstract: Congestion control in a computer network is a problem of controlling a specific object, namely the network itself. Several attempts have been made to develop congestion controllers, mostly using linear control theory. The conventional PID controller generally does not work well for nonlinear, uncertain, higher-order and time-delayed systems. The paper provides an adaptation mechanism designed to prevent unstable behavior, with fuzzy rules and an inference mechanism that identifies the possible sources of nonlinear behavior. The adaptation mechanism can be designed to adjust the PID controller tuning parameters when oscillatory behavior is detected. Tests on nonlinear and uncertain processes are performed. Title: MULTICRITERIA DECISION AID USE FOR CONFLICTING AGENT PREFERENCES MODELING UNDER NEGOTIATION Author(s): Noria Taghezout and Abdelkader Adla Abstract: In most agent applications, the autonomous components need to interact. They need to communicate in order to resolve differences of opinion and conflicts of interest; they also need to work together or simply inform each other. It is however important to note that many existing works do not take the agents' preferences into account. In addition, individual decisions in multi-agent domains are rarely enough to produce optimal plans which satisfy all the goals. Therefore, agents need to cooperate to generate the best multi-agent plan, by sharing tentative solutions, exchanging subgoals, or having other agents satisfy their goals. In this paper, we propose a new negotiation mechanism, independent of domain properties, to handle real-time goals.
The mechanism is based on the well-known Contract Net Protocol. Integrated Station of Production agents are equipped with sufficient behavior to carry out practical operations and simultaneously react to the complex problems caused by dynamic scheduling in real situations. These agents express their preferences by using the ELECTRE III method in order to resolve differences. The approach is tested on simple scenarios. Title: BAYESIAN-NETWORKS-BASED MISUSE AND ANOMALY PREVENTION SYSTEM Author(s): Pablo Garcia Bringas, Yoseba K. Penya, Stefano Paraboschi and Paolo Salvaneschi Abstract: Network Intrusion Detection Systems (NIDS) aim at preventing network attacks and the unauthorised remote use of computers. More precisely, depending on the kind of attack it targets, an NIDS can be oriented to detect misuses (by defining all possible attacks) or anomalies (by modelling legitimate behaviour and detecting behaviour that does not fit that model). Still, since its problem knowledge is restricted to possible attacks, misuse detection fails to notice anomalies, and vice versa. Against this background, we present ESIDE-Depian, the first unified misuse and anomaly prevention system based on Bayesian Networks that completely analyses network packets, together with the strategy to create a consistent knowledge model that integrates misuse- and anomaly-based knowledge. Finally, we evaluate ESIDE-Depian against well-known and new attacks, showing how it outperforms a well-established commercial NIDS. Title: AN EFFICIENT HYBRID METHOD FOR CLUSTERING PROBLEMS Author(s): H. Panahi and R. Tavakkoli-Moghaddam Abstract: This paper presents an efficient hybrid method based on ant colony optimization (ACO) and genetic algorithms (GA) for clustering problems. The proposed method assumes that the ACO agents have a variable life cycle that changes according to a special function. We also apply three local searches, based on heuristic rules, to the given clustering problem.
The proposed method is implemented and tested on two real datasets. Further, its performance is compared with other well-known metaheuristics, such as ACO, GA, simulated annealing (SA), and tabu search (TS). Finally, a paired-comparison t-test is applied to demonstrate the efficiency of the proposed method. The output gives very encouraging results; however, the proposed method needs a longer time to run. Title: MODEL FOR TRUST WITHIN INFORMATION TECHNOLOGY MANAGEMENT Author(s): Dayse de Mello Benzi, Rafael Timóteo de Sousa Júnior, Christophe Bidan and Ludovic Mé Abstract: The objective of this work is to present a model of trust in the management of information technology. It presents relevant aspects of the use of trust in Information Technology (IT) Management. It comments on definitions of trust, and relates the contemporary business environment to growing risks, as far as trust is concerned, based on the complexity deriving from globalized relationships. It focuses on IT management, emphasizing the necessity of alignment with organizational strategies and harmonization with the end activity of companies. To do so, it approaches the impacts of trust on IT management, where recent studies identify that organizations whose IT management is business-focused run fewer risks and gain advantages in relation to others. In this context, it develops the understanding that the safe route is tied to trust, which may provide highly desirable results to management, as long as they are controlled and measured, allowing IT organizations to acquire greater effectiveness in their alignment with the organizational strategy. Title: OPTIMAL LAYOUT SELECTION USING PETRI NET IN AN AUTOMATED ASSEMBLING SHOP Author(s): Iraj Mahdavi, Mohammad Mahdi Paydar, Babak Shirazi and Magsud Solimanpur Abstract: In today's competitive manufacturing systems, it is crucial to respond quickly to the demand of customers and to decrease the total cost of production.
To achieve higher performance in an automated assembling shop, methods are needed to minimize the production cycle time (makespan) and the work-in-process (WIP) in buffers. This paper focuses on the selection of an optimal layout based on the allocation of machines to different locations, as they can perform similar operations with different processing times. The time Petri net (TPN) has been used to illustrate the application of the proposed model in a case study. Title: COMPONENT-BASED SUPPORT FOR KNOWLEDGE-BASED SYSTEMS Author(s): Sabine Moisan Abstract: Software development of knowledge-based systems is difficult. To alleviate this task, we propose to apply software engineering techniques such as component frameworks. This paper investigates Blocks, a component framework for designing and reengineering knowledge-based system inference engines. Blocks provides reusable building blocks common to various engines, independently of their task or application domain. We have used its components to build several engines for various tasks (planning, classification, model calibration) in different domains. The approach appears well fitted to knowledge-based system generators; it allows a significant gain in time and improves software readability and safety. Title: HIPPOCRATIC MULTI-AGENT SYSTEMS Author(s): Ludivine Crépin, Laurent Vercouter, Olivier Boissier, Yves Demazeau and François Jacquenet Abstract: The current evolution of Information Technology leads to an increase of automatic data processing over multiple information systems. In this context, the lack of users’ control over their personal data raises the crucial question of privacy preservation. A typical example concerns the disclosure of confidential identity information without the owner’s agreement. This problem is stressed in multi-agent systems (MAS), where users delegate the control of their personal data to autonomous agents.
Interaction being one of the main mechanisms in MAS, sensitive information exchange and processing are a key issue with respect to privacy. In this article, we propose a model, the ”Hippocratic Multi-Agent System” (HiMAS), to tackle this problem. This model defines a set of principles bearing on an agency to preserve the users’ privacy and the agents’ privacy. In order to illustrate our model, we have chosen the concrete application of decentralized calendar management. Title: A NEW STATISTICAL MODEL - TO DESIGNING A DECISION SUPPORT SYSTEM Author(s): Morteza Zahedi, Ali Pouyan and Esmat Hejazi Abstract: In this paper, we propose a new statistical approach to simulate a technical support center, acting as a help desk for a web site that offers scientific documents and university protocols to students and lecturers. In contrast to existing statistical approaches, which are modelled by general statistical graphs such as Bayesian networks or decision graphs, we propose a statistical approach which can be used consistently in different domains and problem spaces without the need for a new design for each new domain. Furthermore, the proposed statistical model, which is trained on a set of training data collected from experts in a given field, is applicable to high-dimensional, large-sized, non-geometric data for decision-making support. Title: SOLVING THE UNIVERSITY COURSE TIMETABLING PROBLEM BY HYPERCUBE FRAMEWORK FOR ACO Author(s): Jose Miguel Rubio L., Broderick Crawford L. and Franklin Johnson P. Abstract: We present a resolution technique for the University Course Timetabling Problem (UCTP), based on the implementation of the Hypercube framework using the Max-Min Ant System. We present the structure of the problem and the design of its resolution using this framework. A simplification of the UCTP is used, involving three types of hard constraints and three types of soft constraints.
We solve experimental instances and competition instances; the results are presented in comparative form against other techniques. We present an appropriate construction graph and pheromone matrix representation. In addition, a representative instance is solved: the schedules of the school of Computer Science Engineering at the Pontifical Catholic University of Valparaiso; the results obtained for this instance are reported. Finally, conclusions are given. Title: THE DESIGN AND IMPLEMENTATION OF THE INTEGRATED DECISION SUPPORT SYSTEM ON LABOR MARKET Author(s): Dongjin Yu, Shixin Feng and Guangming Wang Abstract: Nowadays, labor resources in China face a fierce situation and competition in employment. A decision support system on the labor market helps the government to have a sound grip on the composition, migration and trends of regional supply and demand for labor resources. This paper proposes an integrated architecture for a decision support system on the regional labor market, which leverages the Service Oriented Architecture and provides end-user-oriented flexibility. The real system implemented on this architecture, called LMDSS, is also presented to show its features: the multi-dimensional analysis of job placement and the prediction of future unemployment rates. Title: DISCOVERING MULTI-PERSPECTIVE PROCESS MODELS Author(s): Francesco Folino, Gianluigi Greco, Antonella Guzzo and Luigi Pontieri Abstract: Process Mining techniques exploit the information stored in the logs of a variety of transactional systems in order to extract some high-level process model, which can be used for both analysis and design tasks. The vast majority of these techniques focus on “structural” (control-flow-oriented) aspects of the process, in that they only consider which elementary activities were executed and in which order.
In this way, any other “non-structural” information usually kept in real log systems (e.g., activity executors, parameter values, and time-stamps) is completely disregarded, despite being a potential source of knowledge. In this paper, we move towards overcoming this limitation by proposing a novel approach to the discovery of process models, where the behavior of a process is characterized from both structural and non-structural viewpoints. In a nutshell, different variants of the process are recognized through a structural clustering approach and represented with a collection of specific workflow models, which provide the analyst with an effective description of typical behavior classes. Relevant correlations between these classes and non-structural properties are made explicit through a rule-based classification model, which can be exploited for both explanation and prediction purposes. Results on a real-life application scenario show that the discovered models are often very accurate and capture important knowledge about the process behavior. Title: A SIMILARITY MEASURE FOR MUSIC SIGNALS Author(s): Thibault Langlois and Gonçalo Marques Abstract: One of the goals in the field of Music Information Retrieval is to obtain a measure of similarity between two musical recordings. Such a measure is at the core of automatic classification, query, and retrieval systems, which have become a necessity due to the ever increasing availability and size of musical databases. This paper proposes a method for calculating a similarity distance between two music signals. The method extracts a set of features from the audio recordings, models the features, and determines the distance between the models. While further work is needed, preliminary results show that the proposed method has the potential to be used as a similarity measure for musical signals.
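A common concrete instance of the model-and-compare scheme described in the music-similarity abstract above is to fit a Gaussian to each track's feature vectors and compare the models with a symmetrised Kullback-Leibler divergence. The sketch below illustrates that standard technique with per-dimension (diagonal) Gaussians; it is a generic illustration, not the authors' exact method or features.

```python
import math

def gauss_kl(mu_p, var_p, mu_q, var_q):
    """KL(p||q) between two univariate Gaussians."""
    return (math.log(math.sqrt(var_q / var_p))
            + (var_p + (mu_p - mu_q) ** 2) / (2.0 * var_q) - 0.5)

def track_distance(feats_a, feats_b):
    """Symmetrised KL distance between per-dimension Gaussian models
    fitted to two tracks' feature matrices (lists of feature vectors)."""
    def fit(feats):
        n, d = len(feats), len(feats[0])
        mu = [sum(row[i] for row in feats) / n for i in range(d)]
        var = [sum((row[i] - mu[i]) ** 2 for row in feats) / n
               for i in range(d)]
        return mu, var
    mu_a, var_a = fit(feats_a)
    mu_b, var_b = fit(feats_b)
    # Sum the symmetrised per-dimension divergences.
    return sum(gauss_kl(ma, va, mb, vb) + gauss_kl(mb, vb, ma, va)
               for ma, va, mb, vb in zip(mu_a, var_a, mu_b, var_b))
```

The distance is zero for identical feature sets and grows as the fitted models diverge; in practice the feature vectors would be frame-level audio descriptors such as MFCCs.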
Title: AUTOMATIC CLASSIFICATION OF MIDI TRACKS Author(s): Alexandre Bernardo and Thibault Langlois Abstract: This paper presents a system for classifying MIDI tracks according to six predefined classes: Solo, Melody, Melody and Solo, Drums, Bass and Harmony. No metadata present in the MIDI file is used. The MIDI data (pitch of notes, onset times and note durations) are preprocessed in order to extract a set of features. These data sets are then used with several classifiers (Neural Networks, k-NN). Title: MANAGING CHARACTERISTIC ORGANIZATION KNOWLEDGE IN COLLABORATIVE NETWORKS Author(s): Ekaterina Ermilova and Hamideh Afsarmanesh Abstract: Modeling and management of the knowledge/information related to organizations is fundamental to the efficient operation of Collaborative Networked Organizations (CNOs). Continuous change in different aspects of the involved organizations is an unavoidable reflection of the dynamic changes in the demands of stakeholders and customers in the market and society. Organizations’ profiles, and especially their competency characteristics tailored to the requirements of opportunities emerging in the market, need to be modeled and managed. Furthermore, mechanisms are required for handling the flexible representation and processing of the organizations’ knowledge. Both research and practice have shown that the formation of collaborative networked organizations requires the pre-establishment of a cluster, also called a breeding environment. In this paper we address Virtual organizations Breeding Environments (VBEs), which provide the necessary conditions and support for the configuration and creation of Virtual Organizations (VOs). Using ontology engineering approaches, we present an approach for modeling organizations’ knowledge inside VBEs, and specify the ontology for their profiles and competencies.
Furthermore, we present the required mechanisms for the management of organizations’ knowledge, and specify the functionality required to manipulate organizations’ information through the life cycle of VBEs. The paper also addresses the logical design of a database for the storage of organizations’ information and for the visualization of organizations’ profile and competency knowledge. Title: INTEGRATING SIMULATION INTO A WEB-BASED DECISION SUPPORT TOOL FOR THE COST EFFECTIVE PLANNING OF VESSEL DISMANTLING PROCESSES Author(s): Charalambia Pylarinou, Dimitrios Koumanakos, Antonios Hapsas, Nikos Karacapilidis and Emmanuel Adamides Abstract: Vessel dismantling is a complex process, which requires advanced planning subject to environmentally safe as well as cost and energy effective standards. Aiming to facilitate stakeholders involved in such activities and to augment the quality of their related decision making, this paper presents an innovative decision support system that takes into account the diversity of the associated constraints (i.e. resources as well as environmental issues, health and safety of the workforce, etc.). The proposed system aids stakeholders in making decisions on qualitative issues such as the appropriateness of a disposal methodology or the level of safety of the workforce in a specific dismantling yard. Being seamlessly integrated with a visual interactive simulation environment, the system facilitates the collaborative design and redesign of dismantling processes. Title: SEMI-AUTOMATIC PARTITIONING BY VISUAL SNAPSHOTS Author(s): Rosa Matias, João-Paulo Moura, Paulo Martins and Fátima Rodrigues Abstract: It is stated that a closer intervention of experts in knowledge discovery can complement and improve the effectiveness of results.
Allied to automatic methods, visualization methods can be built in at previous, subsequent or intermediate stages; in partitioning methods, the visualization of intermediate results raises questions about what constitutes a relevant stopping stage. In this paper our efforts are directed at partitioning data with a spatial component. Stopping stages are configured by experts and are a function of both spatial and non-spatial data. In this work a data mining workflow is proposed to control the visualization of intermediate results, defining concepts such as data mining transaction, data mining save point and data mining snapshot. To abstract the visualization of results, novel visual metaphors are introduced, allowing a better exploration of the produced clusters. In knowledge discovery, experts are responsible for interpreting and validating final results; it would certainly be appropriate to validate intermediate results as well, avoiding, for instance, wasted time when results are unsatisfactory (by leaving the process and restarting it with new hypotheses), allowing data reduction when a cluster is found, and supporting a deeper understanding of attribute combinations. Title: FUZZY INDUCED AGGREGATION OPERATORS IN DECISION MAKING WITH DEMPSTER-SHAFER BELIEF STRUCTURE Author(s): José M. Merigó and Montserrat Casanovas Abstract: We develop a new approach for decision making with the Dempster-Shafer theory of evidence when the available information is uncertain and can be assessed with fuzzy numbers. With this approach, we are able to represent the problem without losing relevant information, so the decision maker knows exactly what the different alternatives and their consequences are. To do so, we suggest the use of different types of fuzzy induced aggregation operators in the problem. Then, we can aggregate the information considering all the different scenarios that could happen in the analysis.
As a result, we get new types of fuzzy induced aggregation operators, such as the belief structure – fuzzy induced ordered weighted averaging (BS-FIOWA) operator and the belief structure – fuzzy induced hybrid averaging (BS-FIHA) operator. We study some of their main properties. We also develop an application of the new approach to a financial decision making problem concerning the selection of financial strategies. Title: AN APPROXIMATE PROPAGATION ALGORITHM FOR PRODUCT-BASED POSSIBILISTIC NETWORKS Author(s): Amen Ajroud, Mohamed Nazih Omri, Salem Benferhat and Habib Youssef Abstract: Product-Based Possibilistic Networks appear as an important tool to efficiently represent possibility distributions. They are close to Probabilistic Networks, since both handle numerical values. Inference is a crucial task for propagating information through the network when special information, called evidence, is observed. This task becomes more difficult when the network is multiply connected. In this paper, we propose an Approximate Inference Algorithm for Product-Based Possibilistic Networks. This inference algorithm is an adaptation of the probabilistic approach “Loopy Belief Propagation” (LBP) to possibilistic networks. Title: A PARTIAL-VIEW COOPERATION FRAMEWORK BASED ON THE SOCIOLOGY OF ORGANIZED ACTION Author(s): Carmen Lucia Ruybal dos Santos, Sandra Sandri and Christophe Sibertin-Blanc Abstract: In this work, we address the modeling of a fragment of the Sociology of Organized Action, making it possible to deal with a hierarchy of the resources of an organization and allowing each member of this organization to have his own view of the state of affairs. We illustrate our approach with an example from the sociology literature and indicate its potential use in crisis management and command and control.
Title: RESEARCH ON LEARNING-OBJECT MANAGEMENT Author(s): Erla Morales, Francisco García and Ángela Barrón Ruiz Abstract: In today’s world, learning object (LO) management is an important subject of study due to LOs’ interoperability possibilities. However, it is not widely promoted because several issues remain to be solved. LOs need to be designed in order to achieve educational goals, and their metadata need to provide useful information so that they can be reused in a suitable way. In this paper we present an experience in designing learning objects with pedagogical sense, accompanied by our proposed metadata information and evaluated according to our LO evaluation instrument. Finally, we present our results and an analysis of this experience. Title: FACILITATION SUPPORT FOR ON-LINE FOCUS GROUP DISCUSSIONS BY MESSAGE FEATURE MAP Author(s): Noriko Imafuji Yasui, Shunsuke Saruwatari, Xavier Llorà and David E. Goldberg Abstract: In order to build marketing strategies, face-to-face focus group discussions have for many years been one of the reliable approaches for collecting a variety of ideas and opinions. Although various network-based communication tools have become available, on-line focus group discussions have not yet become popular, due to complications in facilitating on-line discussions. The goal of this paper is to maximize the benefit of on-line focus group discussions by supporting the facilitator’s task. First, we propose a message feature map and two metrics for measuring message features: centrality and novelty. The message feature map plots discussion messages on a centrality-novelty plane, and gives us an intuitive understanding of the discussions in various aspects. Then, reporting experimental results using real data collected from on-line focus group discussions, we discuss how the message feature map can be utilized for effective facilitation. Title: AN EVALUATION INSTRUMENT FOR LEARNING OBJECT QUALITY AND MANAGEMENT Author(s): Erla M.
Morales Morgado, Francisco J. García Peñalvo and Ángela Barrón Ruiz Abstract: One of the most important requirements for suitable LO management is the ability to assess and manage LOs according to their quality. LOs need to be evaluated taking into account both technical and pedagogical points of view. LO evaluation also needs to consider their characteristics as units of learning, such as size or granularity level. On this basis, this paper presents a proposal for a specific LO evaluation instrument that takes the aspects mentioned above into account, together with the results of its evaluation by experts. Title: MULTIAGENTS IN MANUFACTURING PRODUCTION LINES - DESIGNING FAULT TOLERANT ADAPTIVE PRODUCTION LINES Author(s): Eugen Volk Abstract: The usage of multiagents for the autonomous, fault tolerant and flexible adaptation of production lines is not widespread in industrial applications. This paper presents the usage of multiagents in industrial applications through a realization within a fictitious example of a car body manufacturing production line. The mechanisms presented here for the coordination of agents in a multiagent-based production line are based on the contract net protocol and use ontological matching of individual task ontologies to find an appropriate contractor. Title: CELLULAR AUTOMATA BASED MODELING OF THE FORMATION AND EVOLUTION OF SOCIAL NETWORKS: A CASE IN DENTISTRY Author(s): Rubens A. Zimbres, Eliane P. Z. Brito and Pedro P. B. de Oliveira Abstract: The stability and evolution of networks is a research area that is not well explored. Inadequate attention has been paid to partner characteristics and to the influence of the environment on the evolution process. This study analyzed social network formation in its dynamic aspect through agent-based modeling, using cellular automata. Relationships and decisions were modeled.
Over the course of the interactions, the emergence of consensus in the network could be observed, and the results show that the more impulsive the individuals in a network, the stronger the ties among them will be. Convergence of partner selection criteria could also be noticed. Additionally, a structural hole was shown to have a local influence on how it moves an agent away from the network. This work argues positively for using cellular automata in modeling social (in this case, business) networks, despite their well-known limitations for these kinds of problems. Title: RULES AS SIMPLE WAY TO MODEL KNOWLEDGE - CLOSING THE GAP BETWEEN PROMISE AND REALITY Author(s): Valentin Zacharias Abstract: There is a considerable gap between the potential of rule bases to be a simpler way to formulate high-level knowledge and the reality of tiresome and error-prone rule base creation processes. Based on the experience from three rule base creation projects, this paper identifies reasons for this gap between promise and reality and proposes steps that can be taken to close it. An architecture for the complete support of rule base development is presented. Title: COST-BENEFIT ANALYSIS FOR THE DESIGN OF PERSONAL KNOWLEDGE MANAGEMENT SYSTEMS Author(s): Max Völkel and Andreas Abecker Abstract: Knowledge Management (KM) tools have become an established part of Enterprise Information Systems in recent years. While traditional KM initiatives typically address knowledge exchange within project teams, communities of practice, within a whole enterprise, or even within the extended enterprise (customer knowledge management, KM in the supply chain, ...), the relatively new area of Personal Knowledge Management (PKM) investigates how knowledge workers can enhance their productivity by better encoding, accessing, and reusing their personal knowledge.
In this paper, we present a cost-benefit analysis of PKM -- where benefit comes from efficiently finding task-specific, useful knowledge items, and costs come from search efforts as well as externalisation and (re-)structuring efforts for the personal knowledge base. Title: TOWARDS ENCOURAGING CONTRIBUTION TO A SEMANTIC WIKI-BASED EXPERIENCE REPOSITORY Author(s): Alessandro dos S. Borges and Germana M. da Nóbrega Abstract: Recent trends in the organizational context include tools allowing collaboration under a perspective of emergent structure rather than an a priori embedded one. While these new tools are suitable for promoting spontaneous large-scale content evolution and/or communication, bringing them into company intranets still seems to be a challenge. In educational settings, findings show that applying mechanisms that provide awareness of community needs improves quality and helps to achieve on-time content evolution. We are currently defining such a mechanism, inspired both by the setting of a particular company and by the theoretical background of previous work in the literature. Title: EVS PROCESS MINER: INCORPORATING IDEAS FROM SEARCH AND ETL INTO PROCESS MINING Author(s): Jon Espen Ingvaldsen and Jon Atle Gulla Abstract: Search is the process of locating information that matches a given query. Extract, Transform and Load (ETL) editors provide a user-friendly and flexible environment for creating operation chains and for digging into and exploring data. In this paper, we describe the implementation of a process mining framework, the EVS Process Miner, which incorporates ideas from search and ETL. We also describe two industrial cases that show the value of applying search and graphical operation chain environments in process mining work.
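The EVS Process Miner abstract combines search-style querying with ETL-style operation chains over event logs, but gives no implementation details. The following minimal sketch illustrates the general idea with a made-up toy log; the operation names (extract, search, load) and the log schema are invented for illustration and are not the EVS API.

```python
from datetime import datetime

# A toy event log: each event is one row of a transactional system's log.
log = [
    {"case": "order-1", "activity": "Receive Order", "actor": "ann", "ts": "2008-01-02 09:00"},
    {"case": "order-1", "activity": "Check Credit",  "actor": "bob", "ts": "2008-01-02 10:30"},
    {"case": "order-2", "activity": "Receive Order", "actor": "ann", "ts": "2008-01-03 08:15"},
    {"case": "order-2", "activity": "Ship Goods",    "actor": "eve", "ts": "2008-01-04 16:45"},
]

def extract(events):
    """Extract step: parse raw timestamp strings into datetime objects."""
    for e in events:
        yield {**e, "ts": datetime.strptime(e["ts"], "%Y-%m-%d %H:%M")}

def search(events, query):
    """Search step: keep only events with a field matching the query string."""
    q = query.lower()
    return (e for e in events if any(q in str(v).lower() for v in e.values()))

def load(events):
    """Load step: group the surviving events into traces, one per case."""
    traces = {}
    for e in sorted(events, key=lambda e: e["ts"]):
        traces.setdefault(e["case"], []).append(e["activity"])
    return traces

# Chain the operations, ETL-editor style: extract -> search -> load.
traces = load(search(extract(log), "ann"))
```

Each step consumes and produces a stream of events, so steps can be recombined freely, which is the property that makes graphical operation-chain editors useful.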
Title: MULTIPLICATIVE NEURAL NETWORK WITH SWARM INTELLIGENCE FOR MULTICARRIER TRANSMITTER Author(s): Nibaldo Rodriguez, Claudio Cubillos and Orlando Duran Abstract: In this paper, we propose a novel and effective distortion compensation scheme developed to reduce the nonlinearities of the traveling wave tube amplifier (TWTA) in orthogonal frequency division multiplexing (OFDM) systems at the multicarrier transmitter side. This compensator is developed in order to combine, in the most effective way, the capabilities of the quantum particle swarm optimization technique and multiplicative neural networks. The compensator’s effectiveness has been tested through computer simulations. Improvements in the reduction of constellation warping and enhanced performance in terms of the bit error rate (BER) are obtained for the TWTA with an input back-off level of 0 dB. Title: SEMANTIC ANNOTATION OF EPC MODELS IN ENGINEERING DOMAINS BY EMPLOYING SEMANTIC PATTERNS Author(s): Andreas Bögl, Michael Schrefl, Gustav Pomberger and Norbert Weber Abstract: Extracting process patterns from EPC (Event-Driven Process Chain) models requires performing a semantic analysis of EPC functions and events. An automated semantic analysis faces the problem that an essential part of the EPC semantics is bound to natural language expressions in functions and events with undefined process semantics. The semantic annotation of natural language expressions provides an adequate approach to tackle this problem. This paper introduces a novel approach that enables the automated semantic annotation of EPC functions and events. It employs semantic patterns to analyze the textual structure of natural language expressions and to relate them to instances of a reference ontology. The semantically annotated EPC model elements are then input for subsequent semantic analysis.
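The semantic patterns and reference ontology used by the EPC annotation approach above are not specified in the abstract. Purely to illustrate the idea of relating label text to ontology instances, the sketch below matches an EPC function label against a hypothetical "verb + business object" pattern over an invented mini-ontology; all identifiers are assumptions.

```python
# Hypothetical mini reference ontology: surface forms -> concept identifiers.
ONTOLOGY = {
    "create": "Action:Create",
    "check": "Action:Check",
    "purchase order": "Object:PurchaseOrder",
    "invoice": "Object:Invoice",
}

def annotate_function(label):
    """Match an EPC function label against a 'verb + business object' pattern
    and return the corresponding ontology concepts, or None if no match."""
    words = label.lower().split()
    if len(words) < 2:
        return None
    verb, obj = words[0], " ".join(words[1:])
    if verb in ONTOLOGY and obj in ONTOLOGY:
        return {"action": ONTOLOGY[verb], "object": ONTOLOGY[obj]}
    return None
```

For example, annotate_function("Create Purchase Order") resolves both the action and the business object, while a label with unknown terms simply returns None, leaving the element unannotated for later manual treatment.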
Title: A DECISION SUPPORT SYSTEM FOR INTEGRATED ASSEMBLY AND DISASSEMBLY PLANNING USING A GA APPROACH Author(s): Yuan-Jye Tseng, Hsiao-Ting Kao and Feng-Yi Huang Abstract: In a decision support system for complete product life cycle management, both assembly planning and disassembly planning need to be considered for producing an assembled product. To produce an assembled product, an assembly planning scheme is required to generate a proper assembly sequence with which the components can be grouped and fixed to construct a final product. At the end of the product life cycle, a disassembly planning scheme is performed to generate a disassembly sequence to disassemble and recycle the product. In this research, a new decision support system for complete product life cycle management, integrating assembly and disassembly planning, is presented. First, the spatial relationships of the components and the precedence of the assembly and disassembly operations are analyzed. Second, a genetic algorithm approach is applied to evaluate the integrated assembly and disassembly costs in order to find good assembly and disassembly sequences. A cost function integrating the assembly and disassembly costs is formulated and used as the objective function. An example product is demonstrated and discussed. The test result shows that the developed model of the decision support system is feasible and effective for integrating assembly and disassembly planning at a low integrated assembly and disassembly cost. Title: KNOWLEDGE ACQUISITION WITH OM: A HEURISTIC SOLUTION Author(s): Adolfo Guzman-Arenas and Alma-Delia Cuevas-Rasgado Abstract: Knowledge scattered through the Web inside unstructured documents (text documents) cannot be easily interpreted by computers. To do so, the knowledge contained in a document must be extracted by a parser or a person and poured into a suitable data structure, the best of which for this purpose are ontologies.
By appropriately merging these “individual” ontologies, taking into account repetitions, redundancies, synonyms, meronyms, different levels of detail, different viewpoints of the concepts involved, and contradictions, a large and useful ontology can be constructed. This paper presents OM, an automatic ontology merger that achieves the fusion of two ontologies without human intervention. By repeated application of OM, an ever growing ontology can be constructed for a given area of knowledge. Current uses of OM give hope of achieving automatic knowledge acquisition. Two tasks remain: the conversion of a given text to its corresponding ontology (by a combination of syntactic and semantic analysis) is not yet done automatically; and the exploitation of the large resulting ontology is still under development. Title: A TOOL OF DECISION SUPPORT FOR THE NATURAL RISK MANAGEMENT Author(s): Nadia Abdat and Zaia Alimazighi Abstract: In this paper, we propose a spatial balanced scorecard for seismic risk management. Indeed, to put an effective policy of prevention against natural risk in place, the phenomenon must be well understood, which requires a large volume of information coming from various sources. Geographical information systems (GIS) are widely used for decision support. However, they give a rather static vision, whereas the management of an environmental process in general, and of natural risk in particular, requires tools based on dynamic models. In addition, scorecards are often used to build decision support systems.
In this paper, we propose a balanced scorecard for the management of seismic risk, established on the basis of spatial indicators. Title: A BI-CRITERIA SCHEDULING FRAMEWORK FOR THE SUPPLY CHAIN MANAGEMENT OF MOBILE PROVIDERS Author(s): Göktürk Gezer, İlker Yaz, Hasan Mert Taymaz, Tansel Özyer and Reda Alhajj Abstract: In the delivery sector, answering customer requests quickly and efficiently has become very important in recent years. Thus, people working in the delivery sector have taken up the challenge of building well-orchestrated coordination for more complex delivery systems. This coordination involves keeping customer information and provider (i.e., those who answer requests) information, dividing requests among providers and informing them about their recent jobs, and providing a fast and efficient delivery system. All of this leads to both customer and firm satisfaction. We believe that using mobile devices such as PDAs together with a host PC is a simple but effective way of building such coordination between the providers and the host. The host arranges the received requests and sends them, with their information, to suitable providers by considering every provider’s position and job intensity, so that requests are answered as quickly as possible. Providers can also access information about their jobs, and the shortest path for reaching these jobs, by using their PDAs. Our system makes everything simple and easy: providers do not have to work out how to reach their jobs, and the workers who arrange requests do not have to determine which providers are suitable. Title: ALGORITHMS FOR ESTIMATING FOREST INVENTORY PARAMETERS FROM DATA ACQUIRED BY REMOTE SENSING METHODS Author(s): Ingus Smits and Salvis Dagis Abstract: Two technologies, LiDAR (Light Detection and Ranging) and aerial photography, have very big potential in forest taxation, which is the process of gathering different parameters of a specific region.
Both technologies can be used for finding different parameters of interest, such as the number of trees, tree height and others. This paper presents usage analysis results for LiDAR and aerial photography and describes their possibilities. It also contains an analysis of tree identification algorithms and describes ways of employing them in different processes. Title: A DATA MINING METHOD BASED ON THE VARIABILITY OF THE CUSTOMER CONSUMPTION - A SPECIAL APPLICATION ON ELECTRIC UTILITY COMPANIES Author(s): Félix Biscarri, Ignacio Monedero, Carlos León, Juán I. Guerrero, Jesús Biscarri and Rocío Millán Abstract: This paper describes a method proposed in order to recover electrical energy (lost through abnormality or fraud) by means of a data mining analysis based on outlier detection. It provides a general methodology for obtaining a list of abnormal users using only the general customer databases as input. The whole input information needed is taken exclusively from the general customers’ database. The data mining method has been successfully applied to detect abnormalities and fraud in customer consumption. We provide a real study and include a number of abnormal pattern examples. Title: MASSIVE PARALLEL NETWORKS OF EVOLUTIONARY PROCESSORS AS NP-PROBLEM SOLVERS Author(s): Nuria Gómez Blas, Luis F. de Mingo and Eugenio Santos Abstract: This paper presents a new connectionist model that might be used to solve NP-problems. The best known numeric models are Neural Networks, which are able to approximate any function or classify any pattern set, provided numeric information is injected into the net. Concerning symbolic information, a new research area, inspired by George Paun and called Membrane Systems, has been developed. A step forward, within a similar Neural Network architecture, was taken to obtain Networks of Evolutionary Processors (NEPs). A NEP is a set of processors connected by a graph; each processor deals only with symbolic information, using rules.
In short, objects in processors can evolve and pass through processors until a stable configuration is reached. Despite their simplicity, we show how such networks might be used for solving an NP-complete problem, namely the 3-colorability problem, in linear time and with linear resources (nodes, symbols, rules). Title: WHY HEIDEGGER? - CRITICAL INSIGHTS FOR IS DESIGN FROM PRAGMATISM AND FROM SOCIAL SEMIOTICS Author(s): Ângela Lacerda Nobre Abstract: Martin Heidegger’s ontology represents a landmark in terms of how human knowledge is theorised. Heidegger’s breakthrough achievement is to consider scientific knowledge as a particular case of the broader being-in-the-world instance. Science develops without needing to acknowledge this dependence, though in times of crisis, when previous approaches are no longer effective, it is the link with daily experience that enables the rethinking of earlier assumptions. This valorisation of quotidian practices, and the centrality of experience and of informal knowledge – the prereflexive work – as the antecedents of formal and explicit knowledge, has profound consequences for the creation of organisational information systems. The American School of Pragmatism, developed by Charles Sanders Peirce, had previously argued along similar lines for the non-severing of dual relations such as theory/practice or individual/social. Later, Social Semiotics also developed under the same implicit assumptions, whereby the individual and the social dimensions of human reality are mutually determined. These arguments have long been established by several authors as relevant to information systems design. However, there is an obvious lack of understanding of the central role of such theories in current mainstream research.
Concrete approaches to organisational learning - such as Semiotic Learning - are an example of the huge potential that lies largely unexplored under the umbrella of socio-philosophy. Title: ARCHCOLLECT - A TOOL FOR WEB USAGE KNOWLEDGE ACQUISITION FROM USER’S INTERACTIONS Author(s): Ahmed Ali Abdalla Esmin, Joubert de Castro Lima, Edgar Toshiro Yano and Tiago Garcia de Senna Carneiro Abstract: This paper presents a loosely coupled acquisition mechanism focused on user interactions, associated with semantic data. This tool, called ArchCollect, is used for collecting, transforming, loading and displaying user interactions. It is composed of seven components that gather information coming directly from the user, independently of the monitored applications. The ArchCollect architecture has a relational model with the capacity to keep important information for two main areas: commerce, with products or services, quantities and prices; and applications, with processes, quantities, prices and employees. The relational model also adds the possibility of obtaining the time spent serving each user interaction on the application servers and on the ArchCollect architecture servers. In this architecture, data extraction and data analysis are performed either by specific mechanisms or by decision-making tools, such as OLAP, Data Mining and Statistics tools. Title: NEW APPROACHES TO CLUSTERING DATA - USING THE PARTICLE SWARM OPTIMIZATION ALGORITHM Author(s): Ahmed Ali Abdalla Esmin and Dilson Lucas Pereira Abstract: This paper proposes two new approaches to using the Particle Swarm Optimization (PSO) algorithm to cluster data. It is shown how PSO can be used to find the centroids of a user-specified number of clusters. The proposed approaches use different fitness functions and consider the situation where the data is uniformly distributed. The PSO algorithm with these fitness functions is evaluated on well-known benchmark data sets.
Results show that significant improvements were achieved by the proposed modifications and that the PSO clustering techniques have much potential. Area 3 - Information Systems Analysis and Specification Title: INFORMATION SYSTEM PROCESS INNOVATION EVOLUTION PATHS Author(s): Erja Mustonen-Ollila and Jukka Heikkonen Abstract: This study identifies Information System process innovations’ (ISPIs) evolution paths in three organisations, using a sample of 213 ISPI development decisions over a period that spanned four decades: the early computing era (1954-1965), the mainframe era (1965-1983), the office computing era (1983-1991), and the distributed applications era (1991-1997). These roughly follow Friedman and Cornford’s categorisation of IS development eras. Four categories of ISPIs are distinguished: base line technologies, development tools, description methods, and managerial process innovations. Within the ISPI categories, evolution paths are based on the predecessor and successor relationships of the ISPIs over time. We analyse, for each era, the changes and the dependencies between the evolution paths over time. The discovered dependencies were important in understanding that ISPIs develop through many stages of evolution over time. It was discovered that the dependencies between the evolution paths varied significantly across the three organisations, the four ISPI categories, and the four IS development eras. Title: NEW DIRECTIONS FOR IT GOVERNANCE IN THE BRAZILIAN GOVERNMENT Author(s): Fabio Perez Marzullo, Carlos H. A. Moreira, Jano Moreira de Souza and José Roberto Blaschek Abstract: This paper presents an IT Governance Cycle and a Competency Model that are being developed to identify the intellectual capital and the strategic actions needed to implement an efficient IT Governance program in the Brazilian Government.
This work in progress is driven by the premise that an organization’s human capital must adhere to a set of core competencies in order to correctly prioritize and achieve business results which, regarding government issues, relate to the administration of public resources. It is now widely accepted that IT Governance may help an organization to succeed in its business domain; consequently, through effective investment policies and correct IT decisions, the organization can align business needs with IT resources, achieving highly integrated business services. These new directions intend to propose an IT Governance model capable of improving the IT strategies of government organizations, and to propose core competencies and an IT Governance Cycle, which together define a Governance Framework. Title: A PROCESS FOR DETERMINING USER REQUIREMENTS IN ECRM DEVELOPMENT - A STRATEGIC ASPECT AND EMPIRICAL EXAMINATION Author(s): Ing-Long Wu and Ching-Yi Hung Abstract: Customer relationship management (CRM) has become increasingly important as business focus has shifted from product-centric to customer-centric. However, many organizations fail to achieve its objectives. One of the important determinants is the deployment of electronic CRM (eCRM) in organizations. In essence, CRM is complex, comprising product, channel, consumer, delivery, and service aspects. This requires different approaches in eCRM development. Determining user requirements is the most important phase and the key to final success in system use. This research proposes a strategy-based process for system requirement analysis. Previous research has not discussed the important role of CRM strategies in building eCRM. Moreover, the implementation process has only been partially reported in the literature. Basically, the framework contains three steps: (1) define CRM strategies, (2) identify consumer and marketing characteristics, and (3) determine system requirements. This framework is further examined with empirical data.
The results indicate that CRM strategies have a positive impact on system requirement analysis when developing eCRM. Title: MODELING UNIT TESTING PROCESSES - A SYSTEM DYNAMICS APPROACH Author(s): Kumar Saurabh Abstract: Software development is a complex activity that often exhibits counter-intuitive behavior, in that outcomes often vary quite radically from the intended results. The production of a high quality software product requires the application of both defect prevention and defect detection techniques. A common defect detection strategy is to subject the product to several phases of testing, such as unit, integration, and system testing. These testing phases consume significant project resources and cycle time. As software companies continue to search for ways of reducing cycle time and development costs while increasing quality, software testing processes emerge as a prime target for investigation. This paper presents a system dynamics model of software development aimed at better understanding testing processes. Motivation for modeling testing processes is presented, along with an executable model of the unit test phase. Some sample model runs are described to illustrate the usefulness of the model. Title: PRODUCTION CONFIGURATION OF PRODUCT FAMILIES - AN APPROACH BASED ON PETRI NETS Author(s): Lianfeng Zhang, Brian Rodrigues, Jannes Slomp and Gerard J. C. Gaalman Abstract: Configuring production processes based on process platforms has been recognized as an effective means for companies to produce product variety while maintaining stable production. Production configuration involves diverse process elements, e.g., machines, operations, and a high variety of component parts and assemblies, along with the constraints and restrictions among them. It essentially entails the selection of proper process elements, the subsequent arrangement of the selected elements into processes, and the final evaluation of the multiple configured alternatives to determine an appropriate one.
In an attempt to help companies better understand and implement production configuration, we study the underlying logic of configuring processes based on a process platform by means of dynamic modelling and visualization. Accordingly, we develop a formalism of nested colored timed Petri nets (PNs) to model production configuration. To accommodate the modelling difficulties resulting from the fundamental issues in production configuration, three types of nets, namely process nets, assembly nets and manufacturing nets, together with a nested net system, are defined by adopting the basic principles of colored PNs, nested PNs and timed PNs. The paper demonstrates how the proposed formalism is applied to specify production processes at different levels of abstraction to achieve production configuration. Title: AN EXTREME PROGRAMMING RELEASE PLAN THAT MAXIMIZES BUSINESS PERFORMANCE Author(s): Marcelo C. Fernandes, Antonio Juarez Alencar and Eber Assis Schmitz Abstract: This paper proposes a multi-criteria method to evaluate the value of XP release plans to business. The method, which is based upon information economics and stochastic modeling, helps XP practitioners to select the release plan that maximizes business performance, with considerable consequences for the use of information technology as a competitive advantage tool. Title: DEFINING THE IMPLEMENTATION ORDER OF SOFTWARE PROJECTS IN UNCERTAIN ENVIRONMENTS Author(s): Eber Assis Schmitz, Antonio Juarez Alencar, Marcelo C. Fernandes and Carlos Mendes de Azevedo Abstract: In the competitive world in which we live, where every business opportunity not taken is an opportunity handed to competitors, software developers have distanced themselves, in both language and values, from those who define the requirements that software has to satisfy and who come up with the money that funds its development process.
Such a distance helps to reduce or, in some cases, completely eliminate the competitive advantage that software development may provide to an organization, transforming this value-creation activity into a business cost that is better kept low and under tight control. This article proposes a method for obtaining the optimal implementation order of software units in an information technology development project. This method, which uses a combination of heuristic procedures and Monte Carlo simulation, takes into consideration the fact that software development is generally carried out under cost and investment constraints in an uncertain environment, whose proper analysis indicates how to obtain the best possible return on investment. The article shows that decisions made under uncertainty may be substantially different from those made in a risk-free environment. Title: MODELLING & SIMULATION OF A VIRTUAL CAMPUS - A CASE STUDY REGARDING THE OPEN UNIVERSITY OF CATALONIA Author(s): Angel A. Juan, Pau Fonseca, Joan M. Marquès, Xavi Vilajosana and Javier Faulin Abstract: In this paper we present a case study regarding the modelling and simulation of a real computer system called Castelldefels. This system gives support to the Virtual Campus of the Open University of Catalonia (UOC), an online university that offers e-learning services to thousands of users. After analyzing several alternatives, the OPNET software was selected as the most suitable tool for developing this network-simulation research. The main target of the project was to provide the computer system’s managers with a realistic simulation model of their system.
This model would allow the managers: (i) to analyze the behaviour of the current system in order to discover possible performance problems, such as bottlenecks and weak points in the structure, and (ii) to perform what-if analysis regarding future changes in the system, including the addition of new Internet-based services, variations in the number and types of users, and changes in hardware or software components. Title: WORKFLOW TREES FOR REPRESENTATION AND MINING OF IMPLICITLY CONCURRENT BUSINESS PROCESSES Author(s): Daniel Nikovski Abstract: We propose a novel representation of business processes called workflow trees that facilitates the mining of process models where the parallel execution of two or more sub-processes has not been recorded explicitly in workflow logs. Based on the provable property of workflow trees that a pair of tasks are siblings in the tree if and only if they have identical respective workflow-log relations with each and every remaining third task in the process, we describe an efficient business process mining algorithm of complexity only cubic in the number of process tasks, and analyze the class of processes that can be identified and reconstructed by it. Title: ON CONCEPTUALIZATION AS A SYSTEMATIC PROCESS Author(s): A. J. J. van Breemen and Janos J. Sarbo Abstract: We are concerned with the early phases of ER-modeling, consisting of the primary conceptualization of the underlying application domain. To this end we introduce a process model for the generation of a domain description. By virtue of its close relation with cognitive activity, this model enables the modeler as well as the user to comprehend the concepts of the resulting domain in a natural way. Due to its uniform representation potential, our approach enables knowledge obtained from different stakeholders during a conceptualization process to be combined in an efficient way.
Our objective is not the presentation of a modeling method that rivals ER or ORM modeling techniques, which are suited for modeling the domain as it actually is (descriptive). Our objective is the development of a method that can model the domain as it ought to become (prescriptive) and still offers means to measure the semantic quality of the resulting model. Title: GENERALIZATION AND BLENDING IN THE GENERATION OF ENTITY-RELATIONSHIP SCHEMAS BY ANALOGY Author(s): Marco A. Casanova, Simone D. J. Barbosa, Karin K. Breitman and Antonio L. Furtado Abstract: To support the generation of database schemas of information systems, a five-step design process is proposed that explores the notions of generic and blended spaces and favours the reuse of predefined schemas. The use of generic and blended spaces is essential to achieve the passage from the source space into the target space in such a way that differences and conflicts can be detected and, whenever possible, conciliated. The convenience of working with multiple source schemas to cover distinct aspects of a target schema, as well as the possibility of creating schemas at the generic and blended spaces, are also considered. Title: VARIABILITY MANAGEMENT IN SOFTWARE PRODUCT LINES FOR DECISION SUPPORT SYSTEMS CONSTRUCTION Author(s): María Eugenia Cabello and Isidro Ramos Abstract: This paper presents software variability management in complex cases of Software Product Lines where two kinds of variability emerge: domain variability and application variability. We illustrate the problem by means of a case study in the Decision Support Systems domain. We have dealt with the first one using variability points that are captured using decision-tree techniques in order to select base architectures, and with the second one by decorating the base architectures with the features of the application domain. In order to present this variability management, we focus on the diagnostic domain, a special case of Decision Support Systems.
A generic solution for the automatic construction of systems of this kind is given using our approach: Base-Line Oriented Modeling (BOM). Title: COMBINING DIFFERENT CHANGE PREDICTION TECHNIQUES Author(s): Daniel Cabrero, Javier Garzás and Mario Piattini Abstract: This work contributes to software change prediction research and practice in three ways. Firstly, it reviews and classifies the different types of techniques used to predict change. Secondly, it provides a framework for testing those techniques in different contexts and for doing so automatically. This framework is used to find the best combination of techniques for a specific project (or group of projects) scenario. Thirdly, it provides a new prediction technique based on the expectation of change from the user’s point of view. This new proposal addresses a gap identified during a review of the relevant literature. Title: A COMPARATIVE STUDY OF DOCUMENT CORRELATION TECHNIQUES FOR TRACEABILITY ANALYSIS Author(s): Anju G. Parvathy, Bintu G. Vasudevan and Rajesh Balakrishnan Abstract: One of the important aspects of software engineering is to ensure traceability across the development lifecycle. The traceability matrix is widely used to check for completeness and to aid impact analysis. We propose that this computation of traceability can be automated by looking at the correlation between the documents. This paper describes and compares four novel approaches for traceability computation based on text similarity, term structure and inter-document correlation algorithms. These algorithms rely on different information retrieval techniques for establishing document correlation. Observations from our experiments are also presented. The consequent comparative study helps us identify the most acceptable technique of document correlation (amongst all techniques proposed) for traceability analysis.
The advantages and disadvantages of each of these approaches are discussed in detail. Various scenarios where these approaches would be applicable and the future course of action are also discussed. Title: DELIVERING ACTIONABLE BUSINESS INTELLIGENCE THROUGH SERVICE-ORIENTED ARCHITECTURE Author(s): Zeljko Panian Abstract: The paper discusses the main characteristics of Service-oriented Architecture and examines the feasible ways of using business intelligence solutions as Web services in an SOA environment. With the evolution of Web services, organizations are becoming more sophisticated in their goals for and requirements of this technology, as it offers faster and more flexible deployment, customization and easy integration of BI solutions. Those organizations that choose a Web services strategy will be best positioned to deliver BI content across and beyond the enterprise, making BI accessible to everyone, wherever they work, at a lower cost and in more innovative ways. Title: SELF-ADAPTIVE CUSTOMIZING WITH DATA MINING METHODS - A CONCEPT FOR THE AUTOMATIC CUSTOMIZING OF AN ERP SYSTEM WITH DATA MINING METHODS Author(s): Rene Schult and Gamal Kassem Abstract: The implementation of an ERP system is a long and cost-intensive process. Functions of the ERP system, which are delivered in an enterprise-neutral but sector-specific fashion, need to be adjusted to the specific business requirements of an enterprise. Exact knowledge of the ERP system is required because each ERP system has its own technical concepts and terminologies. Therefore many enterprises employ ERP system experts in order to customise the ERP system to be introduced as well as to further enhance the customisation after its introduction. A concept for the implementation of a Self-Adaptive ERP System should allow for the automatic customisation of an ERP system on the basis of the enterprise process models provided and an analysis of the ERP system usage.
Title: DIAPASON: A FORMAL APPROACH FOR SUPPORTING AGILE AND EVOLVABLE INFORMATION SYSTEM SERVICE-BASED ARCHITECTURES Author(s): Hervé Verjus and Frédéric Pourraz Abstract: This paper presents a novel approach called Diapason for expressing, verifying and deploying evolvable information system service-based architectures. Diapason promotes a pi-calculus-based layered language and a corresponding virtual machine for expressing and executing dynamic and evolvable service orchestrations that support agile business processes. An information system service-based architecture may dynamically evolve, whatever changes may occur during the service orchestration lifecycle. Title: A PROCESS-DRIVEN METHODOLOGY FOR CONTINUOUS INFORMATION SYSTEMS MODELING Author(s): Alfredo Cuzzocrea, Andrea Gualtieri and Domenico Saccà Abstract: In this paper we present a process-driven methodology for continuous information systems modeling. Our approach supports the whole information system life-cycle, from planning to implementation, and from usage to re-engineering. The methodology includes two different phases. First, we produce a scenario analysis adopting a Process-to-Function approach to capture interactions among components of the organization, information and processes. Then, we produce a requirement analysis adopting a Function-for-Process and package-oriented approach. Finally, we deduce an ex-post scenario analysis by applying process mining techniques to repositories of process execution traces. The whole methodology is supported by UML diagrams organized in a Business Model, a Conceptual Model, and an Implementation Model. Title: IS THERE A ROLE FOR PHILOSOPHY IN GROUP WORK SUPPORT? Author(s): Roger Tagg Abstract: There appears to be evidence that much potential IT support for group work is yet to be widely adopted or to achieve significant benefits.
It has been suggested that, in order to achieve better results when applying IT to group work, designers should take more notice of modern philosophies that avoid the so-called "Cartesian Dualism" of mind separated from matter. It is clear that group work support is more than a matter of automating formal procedures. This paper reviews the question from the author’s lifetime of experience as a consultant, academic and group worker; proposes some models to address some of the missing perspectives in current approaches; and suggests how future efforts could be re-orientated to achieve better outcomes. Title: ON-THE-FLY AUTOMATIC GENERATION OF SECURITY PROTOCOLS Author(s): Shinsaku Kiyomoto, Haruki Ota and Toshiaki Tanaka Abstract: In this paper, we present an automatic generation tool for authentication and key exchange protocols. Our tool generates a security protocol definition file from input data describing the requirements for the protocol. The tool constructs a new protocol that combines pieces of security protocols as building blocks. The transaction time of the tool is less than one second, and it achieves more rapid generation of security protocols than existing generation tools. Title: AN OVERVIEW OF THE OSTRA FRAMEWORK FOR THE TRANSITION TO SOA Author(s): Fabiano K. T. Tiba, Shuying Wang and Miriam A. M. Capretz Abstract: Service-Oriented Architecture is an emerging paradigm to build applications as a collection of services that can be coordinated to provide flexible applications. At its full potential, the development of services is considered from the perspective of the enterprise, to deliver services that can be reused across applications and provide flexible solutions. An enterprise-wide SOA has the potential to enable organizations to adapt faster in order to meet the changing needs of both the market and its customers, as well as to increase the quality of their solutions.
However, the achievement of an enterprise-wide SOA is challenging, as it remains unclear how organizations should evolve towards the SOA paradigm. This paper discusses issues related to the transition to SOA and provides an overview of OSTRA - a framework which provides a realistic approach for the transition to SOA by considering short-term and long-term goals and balancing planning and management with practical experimentation. Title: SUPPORT DISCIPLINES FOR SYSTEMS DEVELOPMENT IN SMES - A CONCEPTUAL MAP Author(s): Luis E. Mendoza, María Pérez, Edumilis Méndez and Wilfredo Báez Abstract: Both software configuration management (SCM) and project management (PM) involve applying knowledge, skills, tools and techniques which support the development of software systems (SS). Developing an effective project management plan, which minimizes the risks and restrictions inherent to the project, becomes more difficult as time goes on. Achieving an effective balance between the scope, timeframe, and costs associated with the project is also a complex task. This task is equally challenging for small and medium-sized enterprises (SMEs). SCM is a powerful tool for the administration and control of the life cycle of the SS, and it is linked to the process of quality assurance. This article presents the first results of ongoing research whose purpose is to develop a framework which incorporates the methodological aspects of PM and SCM for SMEs. The first step in the design of this framework was to build a conceptual model to be used as a systemic vision of the semantic basis that unifies and supports the relationships between these concepts. Title: KNOWLEDGE ELICITATION TECHNIQUES FOR DERIVING COMPETENCY QUESTIONS FOR ONTOLOGIES Author(s): Lila Rao, Han Reichgelt and Kweku-Muata Osei-Bryson Abstract: This research explores the applicability of existing knowledge elicitation techniques for the development of competency questions for ontologies.
This is an important area of research, as competency questions are used to evaluate an ontology. The use of appropriate knowledge elicitation techniques increases the likelihood that these competency questions will be reflective of what is needed of the ontology. It thus helps ensure the quality of the ontology (i.e. the competency questions will adequately reflect the end users’ requirements). Title: MODELING INCREASINGLY COMPLEX SOCIO-TECHNICAL ENVIRONMENTS Author(s): I. T. Hawryszkiewycz Abstract: The paper focuses on modeling large open information systems. These are systems composed of many activities that include relationships between activity participants to create new knowledge and services. The systems are further complicated by the changing nature of both the activities and the relationships. The paper proposes increased emphasis on modelling work and social structures and using the models to generate role-based interfaces. It illustrates the application to the design of complex outsourcing systems. Title: INCREMENTAL ONTOLOGY INTEGRATION Author(s): Thomas Heer, Daniel Retkowitz and Bodo Kraft Abstract: In many areas of computer science, ontologies are becoming more and more important. The use of ontologies for domain modeling often brings up the issue of ontology integration. The task of merging several ontologies, covering specific subdomains, into one unified ontology has to be solved. Many approaches to ontology integration aim at automating the process of ontology alignment. However, complete automation is not feasible, and user interaction is always required. Nevertheless, most ontology integration tools offer only very limited support for the interactive part of the integration process. In this paper, we present a novel approach for the interactive integration of ontologies. The result of the ontology integration is incrementally updated after each definition of a correspondence between ontology elements. The user is guided through the ontologies to be integrated.
By restricting the possible user actions, the integrity of all defined correspondences is ensured by the tool we developed. We evaluated our tool by integrating different regulations concerning building design. Title: DESIGNING BUSINESS PROCESS MODELS FOR REQUIRED UNIFORMITY OF WORK Author(s): Kimmo Tarkkanen Abstract: Business process and workflow models play an important role in developing information system integration and, later, in training for its usage. New ways of working and information system usage practices are designed with as-is and to-be process models, which are implemented into system characteristics. However, after the IS implementation the work practices may become differentiated. Variation in work practices within the same business process can have unexpected and harmful social and economic consequences in an IS-mediated work environment. This paper employs grounded theory methodology and a case study to explore non-uniformity of work in a Finnish retail business organization. By differentiating two types of non-uniform work tasks, the paper shows how process models can be designed with less effort, yet maintaining the amount of uniformity required by the organization and support for employees’ uniform actions. In addition to process model designers, the findings help organizations struggling with the consistency of IS use practices to separate the practices that may prove most harmful from those that are not worth altering. Title: A CONCEPTUAL MODEL FOR GROUP SUPPORT SYSTEMS IN LOCAL COUNCILS Author(s): Joycelyn Harris and Lily Sun Abstract: Government organisations have a unique culture due to their social obligations and public accountability. It is generally agreed that e-government implies the use of information communication technologies (ICT) for the provision of public services, the improvement of managerial effectiveness and the promotion of democratic values and mechanisms.
Most discussions about e-government focus on the use of ICT to improve the relationship between government and citizen, but not the relationship between government and employees. Government collaboration takes place within a specific context, and incentives to collaborate are embedded within institutional arrangements and management structure. This paper proposes a conceptual model to assist local council employees to perform group activities in partnership with external stakeholders in order to achieve council goals. Title: ENGINEERING MOBILE AGENTS Author(s): Arnon Sturm, Dov Dori and Onn Shehory Abstract: As the mobile agent paradigm becomes of interest to many researchers and industries, it is essential to introduce an engineering approach for designing such systems. Recent studies on agent-oriented modeling languages have recognized the need for modeling mobility aspects such as why a mobile agent moves, where the agent moves to, when it moves and how it reaches its target. These studies extend existing languages to support the modeling of agent mobility. However, they fall short in addressing some modeling needs. They lack expressiveness: some of them ignore the notion of location (i.e., the "where") while others do not handle all types of mobility (the "how"). Additionally, they lack accessibility, as the handling of the mobility aspects is separated into multiple views, and occasionally the mobility aspect is tightly coupled with the functional behavior specification. View multiplicity reduces comprehensibility and ease of specification, whereas coupling with the functional behavior specification reduces the flexibility of deploying a multi-agent system in different configurations (i.e., without mobility). In this paper, we address these problems by enhancing an expressive and accessible modeling language with capabilities for specifying mobile agents.
We provide the details of the extension, then illustrate the use of the extended modeling language, and demonstrate the way in which it overcomes existing problems. Title: A REQUIREMENTS ENGINEERING PROCESS MODEL FOR DISTRIBUTED SOFTWARE DEVELOPMENT - LESSONS LEARNED Author(s): Leandro Teixeira Lopes and Jorge Luis Nicolas Audy Abstract: In the growing market of global software development (GSD), requirements engineering emerges as a critical process impacted by distribution. The need for a process to address the difficulties caused by team dispersion in requirements engineering is recognized. The objective of this paper is to present lessons learned from a case study conducted to evaluate a requirements engineering process model for distributed software development. Empirical results were obtained in a multinational organization that develops software with teams distributed in a global setting. The main contribution of this paper is to provide insight into the use of a requirements engineering process model for GSD, as well as new information on current needs for changes or revisions in traditional requirements engineering models. Title: A DECISION SUPPORT SYSTEM TAYLORED FOR ROMANIAN SMALL AND MEDIUM ENTERPRISES Author(s): Razvan Petrusel Abstract: This paper presents an overview of a prototype of a real-world decision support system (DSS) that was developed in order to improve financial decisions in Romanian small and medium enterprises (SME). The goal of the paper is to show weaknesses in strategic, tactical and operational financial decision making in Romanian SMEs and to show how improvement is possible through the use of a cash-flow-based DSS and several specialised expert systems. The paper focuses mainly on requirements elicitation and on system validation. The impact and benefits of using information technology in decision-making processes within the enterprise are highlighted.
Title: CHALLENGES IN SOFTWARE DESIGN IN LARGE CORPORATIONS - A CASE STUDY AT SIEMENS AG Author(s): Peter Killisperger, Markus Stumptner, Georg Peters and Thomas Stückl Abstract: Successful software development is still a challenge, and improvements in software processes and their application are an active research domain. Siemens has started research projects aiming to improve Siemens’ software process related activities. The adaptation of generic software process models to project-specific contexts and the application of instantiated processes play a major role in these efforts. Several solutions have been put forward by research in recent years to better standardise and automate these procedures. However, no approach has become a de facto standard, and instantiation and application of processes in practice are still often done manually and not standardised. On the basis of an analysis of Siemens’ current practice, a New Software Engineering Framework (NSEF) is suggested for improving the instantiation and the application of software processes at Siemens. In particular, the paper suggests the development of a concept and implementation of a gradual instantiation approach. Title: BETTER IT GOVERNANCE FOR ORGANIZATIONS - A MODEL FOR IMPROVING FLEXIBILITY AND CAPABILITIES OF STRATEGIC INFORMATION SYSTEMS PLANNING (SISP) THROUGH EA AND BPR UNDER E-BUSINESS ENVIRONMENT Author(s): Jungho Yang Abstract: Within more turbulent, increasingly globalized and digitalized environments, Strategic Information Systems Planning (SISP) has been recognized as one of the most significant factors for effective and efficient IT governance, improving organizations’ effectiveness and capabilities by changing the characteristics or overall governance of organizations.
Although organizations have introduced various well-known methodologies for creating SISP successfully in order to maximize their strategic opportunities and values, the current literature indicates that there is no perfect and fully comprehensive methodology or model that satisfies organizations. The purpose of this paper is to propose a model that can address the issues of existing models and support improved flexibility and capabilities, while at the same time minimizing waste and systems inconsistency, by incorporating EA, BPR and a concurrent approach. Ongoing research will be a case study to validate the proposed model in the government of Korea and to seek other potential issues and factors compared with other sectors. Title: SEMANTICS AND REFINEMENT OF BEHAVIOR STATE MACHINES Author(s): Kevin Lano and David Clark Abstract: In this paper we present an axiomatic semantics for UML 2 behavior state machines, and give transformation rules for establishing refinements of behavior state machines, together with proofs of the semantic validity of these rules, based on a unified semantics of UML 2. Title: A METADATA-DRIVEN APPROACH FOR ASPECT-ORIENTED REQUIREMENTS ANALYSIS Author(s): Sérgio Agostinho, Ana Moreira, André Marques, João Araújo, Isabel Brito, Ricardo Ferreira, Ricardo Raminhos, Jasna Kovačević, Rita Ribeiro and Philippe Chevalley Abstract: This paper presents a metadata-driven approach based on aspect-oriented requirements analysis. This approach has been defined in cooperation with the European Space Agency in the context of the “Aspect Specification for the Space Domain” (ASSD) project. ASSD aims at assessing the applicability and usefulness of aspect-orientation for the space domain (ground segment software projects in particular), focusing on the early stages of the software development life cycle.
This paper describes a rigorous representation of requirements analysis concepts, refines a method for handling early aspects, and proposes a client/server architecture based on a metadata repository. Title: CAMEL FRAMEWORK - A FRAMEWORK FOR REALIZING COMPLETE SEPARATION OF DEVELOPER’S AND DESIGNER’S WORK IN RICH INTERNET APPLICATION Author(s): Hiroaki Fukuda and Yoshikazu Yamamoto Abstract: This paper presents a framework called Camel which completely separates developers’ and designers’ work in developing Rich Internet Applications. In recent years, the web has become one of the most popular means of transferring information, through services such as content search, advertising and online shopping. We usually use an appropriate web application to meet our requirements. In the development process of current web applications, designers design the interfaces of an application by using HTML, CSS and images, while developers not only implement the business logic but also modify the interface for dynamic generation of HTML tags based on the executed logic. Therefore, not only designers but also developers have to work every time the design of the interface is changed. This paper describes the Camel framework and its implementation, which is based on Flex, and then demonstrates its usage and utility by implementing a real application. Title: SECURITY ANALYSIS OF THE HEALTH CARE TELEMATICS INFRASTRUCTURE IN GERMANY Author(s): Michael Huber, Ali Sunyaev and Helmut Krcmar Abstract: Based on ISO 27001 for Information Security Management Systems, this paper introduces a newly developed security analysis approach, suitable for technical security analyses in general. This approach is used for a security analysis of several components and processes of the Health Care Telematics in Germany. Besides the results of the analysis, a basis for further analysis and verification activities is given.
Title: MANAGING PROCESS VARIANTS IN THE PROCESS LIFE CYCLE Author(s): Alena Hallerbach, Thomas Bauer and Manfred Reichert Abstract: When designing process-aware information systems, variants of the same process often have to be specified. Each variant then constitutes an adjustment of a particular process to specific requirements building the process context. Current Business Process Management (BPM) tools do not adequately support the management of process variants. Usually, the variants have to be kept in separate process models. This leads to huge modeling and maintenance efforts. In particular, more fundamental process changes (e.g., changes of legal regulations) often require the adjustment of all process variants derived from the same process; i.e., the variants have to be adapted separately to meet the new requirements. This redundancy in modeling and adapting process variants is both time-consuming and error-prone. This paper presents the Provop approach, which provides a more flexible solution for managing process variants in the process life cycle. In particular, process variants can be configured out of a basic process following an operational approach; i.e., a specific variant is derived from the basic process by applying a set of well-defined change operations to it. Provop provides full process life cycle support and allows for flexible process configuration, resulting in a maintainable collection of process variants. Title: ASSESSING THE QUALITY OF ENTERPRISE SERVICES - A MODEL FOR SUPPORTING SERVICE ORIENTED ARCHITECTURE DESIGN Author(s): Matthew Dow, Pascal Ravesteijn and Johan Versendaal Abstract: Enterprise Services have been proposed as a more business-friendly form of web services which can help organizations bridge the gap between the IT capabilities and business benefits of Service Oriented Architecture.
However, up until now there have been almost no methodologies for creating enterprise services, and no list of definitive criteria that constitute a “good” enterprise service. In this paper we present a model which can aid Service Oriented Architecture designers by giving them a set of researched criteria that can be used to measure the quality of enterprise service definitions. The model and criteria have been constructed by interviewing experts from one of the big five consultancy firms and by conducting a literature study of software development life cycle methods and Service Oriented Architecture implementation strategies. The results have been evaluated using a quantitative survey and qualitative expert interviews, which have produced empirical support for the importance of the model criteria to enterprise service design. The importance of business ownership and business value metrics for enterprise services is stressed, leading to suggestions for future research that links this area more closely with Service Oriented Architecture governance, Service Oriented Architecture change management, and Business Process Management. Title: MAXIMIZING THE BUSINESS VALUE OF SOFTWARE PROJECTS - A BRANCH & BOUND APPROACH Author(s): Antonio Juarez Alencar, Eber Assis Schmitz, Ênio Pires de Abreu, Marcelo Carvalho Fernandes and Armando Leite Ferreira Abstract: This work presents a branch & bound method that allows software managers to determine the optimum order for the development of a network of dependent software parts that have value to customers. In many circumstances the method allows for the development of complex and otherwise expensive software from a relatively small investment, favoring the use of software development as a means of obtaining competitive advantage.
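The branch & bound idea in the last abstract can be sketched as a search over feasible development orders: each node fixes a prefix of the order, and an optimistic bound (the undiscounted sum of the remaining positive net values) prunes prefixes that cannot beat the incumbent. The module data, discount rate, and one-module-per-period assumption below are illustrative, not taken from the paper:

```python
# Hypothetical module data for illustration: name -> (cost, value, prerequisites).
# Names and figures are invented, not taken from the paper.
MODULES = {
    "core":   (50, 20, set()),
    "api":    (30, 60, {"core"}),
    "ui":     (40, 90, {"api"}),
    "report": (20, 45, {"core"}),
}
RATE = 0.1  # discount rate per development period

def npv(order, rate=RATE):
    """Discounted net value when one module is built per period."""
    total = 0.0
    for t, name in enumerate(order):
        cost, value, _ = MODULES[name]
        total += (value - cost) / (1 + rate) ** t
    return total

def best_order():
    """Branch & bound over development orders that respect dependencies."""
    best = {"npv": float("-inf"), "order": None}

    def bound(partial, remaining):
        # Optimistic completion: remaining positive nets, undiscounted.
        return npv(partial) + sum(
            max(0.0, MODULES[m][1] - MODULES[m][0]) for m in remaining)

    def branch(partial, remaining):
        if not remaining:
            value = npv(partial)
            if value > best["npv"]:
                best.update(npv=value, order=list(partial))
            return
        if bound(partial, remaining) <= best["npv"]:
            return  # prune: this prefix cannot beat the incumbent
        for m in sorted(remaining):
            if MODULES[m][2] <= set(partial):  # all prerequisites built
                branch(partial + [m], remaining - {m})

    branch([], set(MODULES))
    return best["order"], best["npv"]
```

For the data above, the search returns the order core, api, ui, report; the actual valuation model and pruning rules of Alencar et al. are, of course, the paper's own.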
Title: SOFTWARE OFFSHORE DEVELOPMENT - A CRITERIA DEFINITION FOR REFERENCE MODELS COMPARISON Author(s): Leonardo Pilatti and Jorge Luis Nicolas Audy Abstract: Software development has always been a challenge, and its complexity increases significantly when company team members are working in an offshore environment. The need for a set of processes, including those that organize the development strategy, grows with every project. Reference models, maturity models and frameworks that try to solve this problem can be found in the literature; however, few comparative analyses have been carried out to validate the appropriate solution. The objective of this paper is to elaborate and define a set of criteria for a comparative analysis of the main global software development reference models, focusing on offshore development, based on an extensive literature review and a case study in industry. The case study was conducted at two different sites in Brazil. Title: ITO-TRACKER - A TOOL FOR EVALUATING ITO PROJECTS BASED ON CRITICAL SUCCESS FACTORS Author(s): Edumilis Méndez, María Pérez, Luis E. Mendoza and Maryoly Ortega Abstract: Nowadays, the Latin American software industry, as it is mostly represented by Small and Medium Enterprises (SMEs), should focus on improving its service capacity towards high quality, low costs, and timely delivery. Within this context, SMEs providing Information Technology Outsourcing (ITO) services require information that allows them to assess and monitor their contractual relationships and the agreements contained therein, considering a set of critical success factors that may vary depending on the type of project addressed. This article describes the development of a tool named ITO-Tracker, designed for the evaluation of ITO projects oriented to software, hardware, networks and databases, based on Critical Success Factors (CSFs).
ITO-Tracker offers guidance to the parties involved in the service through regular control and follow-up activities. The methodology used for developing this tool was the iterative and incremental process known as the Rational Unified Process (RUP). For this purpose, we documented the development process and the results of the ITO-Tracker conceptualization, analysis, design and construction. In addition, the evaluation of an ITO project in the software development area using the tool is presented. Title: CRITICAL SUCCESS FACTORS TO EVALUATE INFORMATION TECHNOLOGY OUTSOURCING PROJECTS Author(s): Edumilis Méndez, María Pérez, Luis E. Mendoza and Maryoly Ortega Abstract: Nowadays, a large number of companies delegate their tasks to third parties in order to reduce costs, increase profitability, expand their horizons, and increase their competitive capacity. The level of success of such contracting and related agreements is influenced by a set of critical factors that may vary depending on the type of project addressed. This article proposes a model of Critical Success Factors (CSFs) to evaluate Information Technology Outsourcing (ITO) projects. The model is based on the ITO state of the art, related technologies and the different aspects that affect ITO project development. Altogether, 22 CSFs were formulated for data center, network, software development and hardware support technologies. Additionally, the proposed model was evaluated using the Feature Analysis Case Study Method proposed by DESMET. The methodology applied for this research consisted of an adaptation of the Methodological Framework for Research in Information Systems, including the Goal Question Metric (GQM) approach. For model operationalization, 400 metrics were established, which allowed measuring the CSFs and ensuring the model's effectiveness in evaluating the probability of success of an ITO project and providing guidance to the parties involved in such a service.
Title: A GROUP AUTHENTICATION MODEL FOR WIRELESS NETWORK SERVICES BASED ON GROUP KEY MANAGEMENT Author(s): Huy Hoang Ngo, Xianping Wu and Phu Dung Le Abstract: Group authentication provides authentication for the members of a communication group of services and users in insecure environments such as wireless networks. Most group authentication models do not consider the risk of compromised shared secrets in the group under security threats such as cryptanalysis. Although authentication key exchange in groups can benefit from group key management to minimise this risk, existing group key management schemes are inefficient for authentication services. In this paper, a group authentication model for wireless network services using group key management is presented. The group key management is specially designed with forward secrecy and session keys for efficient and secure key exchange. Based on these secure session keys, a dynamic group authentication scheme provides secure and efficient group authentication for wireless network users and services. Title: INTEGRATING CREATIVITY INTO EXTREME PROGRAMMING PROCESS Author(s): Broderick Crawford and Claudio León de la Barra Abstract: Human and social factors are very important in developing software, and the development of new software requires the generation of novel ideas. In this paper, the Agile method called eXtreme Programming (XP) is analyzed and evaluated from the perspective of creativity, in particular the creative performance and structure required at the teamwork level. The conclusion is that XP has characteristics that support the creative performance of the team members, but we believe that it can be further fostered from a creativity perspective. Title: DATA ENCRYPTION AND DECRYPTION USING ANZL ALGORITHM Author(s): Artan Luma and Nderim Zeqiri Abstract: What is the ANZL algorithm? It is a genuine result of our work which has been theoretically and practically proved.
Through the ANZL algorithm we can change the statement of the Pythagorean theorem which says: for p and q (one of them odd and the other even), there is only one fundamental solution (x, y, z). Using the ANZL algorithm's formulas, this statement changes to: for numbers p and q (one of them odd and the other even) there are at least two fundamental solutions (x_1, y_1, z_1) and (x_2, y_2, z_2), and there are also special cases where even three fundamental solutions are possible: (x_1, y_1, z_1), (x_2, y_2, z_2) and (x_3, y_3, z_3). Based on these solutions we can easily create the encryption and decryption keys. Title: INTRODUCING SERVICE-ORIENTATION INTO SYSTEM ANALYSIS AND DESIGN Author(s): Prima Gustiene and Remigijus Gustas Abstract: The conventional methods of information system analysis and design are not based on the service-oriented paradigm, which facilitates control of business process continuity and integrity. Service-oriented representations are quite comprehensible for business experts as well as system designers, so it is reasonable to conceptualize a business process in terms of service-oriented events before the supporting technical system is designed. UML design primitives abstract from concrete implementation artefacts and are therefore difficult for business analysis experts to comprehend. Separation of different modelling dimensions tends to draw attention away from the semantic consistency issues of the various diagram types. The presented approach for service-oriented analysis is based on just three types of events: creation, reclassification and termination, which can also be used for semantic integrity and consistency control. In this paper, the basic service-oriented constructs are defined. The semantics of these implementation-neutral artefacts are analysed in terms of their associated counterparts in object-oriented design.
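For context on the ANZL abstract above: in the classical setting, a pair (p, q) with p > q > 0 and of opposite parity generates a Pythagorean solution via Euclid's formula x = p^2 - q^2, y = 2pq, z = p^2 + q^2. The sketch below shows only this classical construction; the additional fundamental solutions and the key-derivation step are the paper's own results and are not reproduced here. The function name is illustrative.

```python
# Classical Euclid construction (for context; not the ANZL formulas themselves).
# Given p > q > 0 with one of p, q odd and the other even, one solution is:
#   x = p^2 - q^2, y = 2pq, z = p^2 + q^2
def euclid_triple(p, q):
    if not (p > q > 0 and (p - q) % 2 == 1):
        raise ValueError("need p > q > 0 with one of p, q odd and the other even")
    x, y, z = p * p - q * q, 2 * p * q, p * p + q * q
    assert x * x + y * y == z * z  # holds identically by algebra
    return x, y, z

print(euclid_triple(2, 1))  # (3, 4, 5)
```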
Title: RANKING REFACTORING PATTERNS USING THE ANALYTICAL HIERARCHY PROCESS Author(s): Eduardo Piveta, Ana Moreira, Marcelo Pimenta, João Araújo, Pedro Guerreiro and R. Tom Price Abstract: This paper describes how to rank refactoring patterns so as to improve a set of quality attributes of a piece of software. The Analytical Hierarchy Process (AHP) is used to express the relative importance of the quality attributes and the relative importance of the refactoring patterns with regard to those selected quality attributes. This ranking of refactoring patterns can be used to focus the refactoring effort on the patterns most beneficial to the software being developed or maintained. Title: USING ASSOCIATION RULES TO LEARN CONCEPT RELATIONSHIPS IN ONTOLOGIES Author(s): Jon Atle Gulla, Terje Brasethvik and Gøran Sveia Kvarv Abstract: Ontology learning is the application of automatic tools to extract ontology concepts and relationships from domain text. Whereas ontology learning tools have been fairly successful in extracting concept candidates, it has proven difficult to detect relationships with the same level of accuracy. This paper discusses the use of association rules to extract relationships in the project management domain. We evaluate the results and compare them to another method based on tf.idf scores and cosine similarities. The findings confirm the usefulness of association rules, but also expose some interesting differences between association rule and cosine similarity methods in ontology relationship learning. Title: A FRAMEWORK FOR MONITORING AND RUNTIME RECOVERY OF WEB SERVICE-BASED APPLICATIONS Author(s): René Pegoraro, Riadh Ben Halima, Khalil Drira, Karim Guennoun and João Maurício Rosário Abstract: Service-Oriented Architecture (SOA) is an architectural style in which a collection of services communicate with each other to execute business processes.
With the increasing popularity of SOA development supported by Web services, and since deployed processes may change in terms of functional behaviour and non-functional Quality of Service (QoS), we need tools to monitor, diagnose, and repair Web services. This work presents a framework supporting a self-healing architecture that reduces the effects of Web service QoS misbehaviour in SOA-style processes. The major contributions of this paper are the use of a proxy server to measure Web service QoS and the employment of strategies to recover from the effects of misbehaving Web services. Title: MANAGING REQUIREMENTS CHANGE AS PLM PROCESS Author(s): Mourad Messaadia, Jacqueline Konaté and Abd-El Kader Sahraoui Abstract: This work addresses PLM issues within a systems engineering framework, focusing mainly on the requirements change issue and how to integrate it into PLM information systems. The contribution is in two steps: firstly, we analyze requirements evolution engineering in terms of collaboration processes; secondly, we consider the impact of requirements change on both the final product and the enabling product. The initial approach is preliminary and is illustrated by a case study. Title: REPRESENTATION AND REASONING MODELS FOR C3 ARCHITECTURE DESCRIPTION LANGUAGE Author(s): Abdelkrim Amirat and Mourad Oussalah Abstract: Component-based development is a proven approach to managing the complexity of software and its need for customization. At the architectural level, one describes the principal system components and their pathways of interaction. Architecture is thus considered to be the driving aspect of the development process; it allows specifying which aspects and models are needed at each level according to the software architecture design. Early architecture description languages (ADLs) focus almost exclusively on the structural abstraction hierarchy, ignoring the behavioural description hierarchy, conceptual hierarchy, and metamodeling hierarchy.
In this paper we focus on these four hierarchies, which represent views from which to appropriately “reason about” software architectures described using our C3 metamodel, a minimal and complete architecture description language. We provide a set of mechanisms to deal with the different levels of each hierarchy, and we introduce our own structural definition for the connector elements deployed in C3 architectures. Title: CONTEXT-ORIENTED WEB METHODOLOGY WITH A QUALITY APPROACH Author(s): Anna Grimán, María Pérez, Maryoly Ortega and Luis E. Mendoza Abstract: Dependency on Web systems and applications has increased in recent years. Their use and quality have gained relevance, and demands in this field have significantly increased over time. This has driven the application of processes that include considerations related to business dynamism and quality expectations for the final product. This work describes a methodology for Web application development that ensures quality at all phases and is especially useful in small and medium-sized projects, regardless of the platform or architecture used. We performed an analysis of the main existing methodologies that allowed us to extract the best known practices and combine them in the proposed solution. A comparison between agile and plan-driven methodologies established the most suitable process model for this type of development. As a result, a context-oriented web methodology -COWM- was obtained, including best practices to ensure quality throughout the whole process. Finally, a COWM evaluation was performed on a case study in order to prove its applicability and efficiency for Web systems development. Title: MODELS FOR PARALLEL WORKFLOW PROCESSING ON MULTI-CORE ARCHITECTURES Author(s): Thomas Rauber and Gudula Rünger Abstract: The advent of multi-core processors offers ubiquitous parallelism and a new source of powerful computing resources for all kinds of software products.
However, most software systems, especially in business computing, are sequential and cannot exploit the new architectures. Appropriate methodologies and models to include parallel features in business software are required. In this article, we consider workflow systems with explicit workflow definitions and explore the possibilities for defining parallel and concurrent executions in business processes for an implementation on multi-core systems. The article also presents a parallel execution model for workflows and addresses the scheduling of workflow tasks for multi-core architectures. Title: AN APPROACH TO SUPPORT THE STRATEGIC ALIGNMENT OF SOFTWARE PROCESS IMPROVEMENT PROGRAMS Author(s): André Luiz Becker, Jorge Luis Nicolas Audy and Rafael Prikladnicki Abstract: The alignment between the strategy of a software process improvement program and an organization's business strategies has been mentioned as a critical success factor. However, the main software process reference models do not explicitly guide companies towards defining processes that meet their strategic goals. In this context, the purpose of this paper is to present a process for the strategic alignment of software process improvement programs. Preliminary results indicate benefits beyond the demands of a software process reference model itself, including the planning and execution of a software process improvement program that takes the organization's strategic goals into account in a more systematic way. Title: SORCER: COMPUTING AND METACOMPUTING INTERGRID Author(s): Michael Sobolewski Abstract: This paper investigates Grid computing from the point of view of three basic computing platforms. Each platform considered consists of virtual compute resources, a programming environment allowing for the development of grid applications, and a grid operating system to execute user programs and to make solving complex user problems easier.
Three platforms are discussed: the compute Grid, the metacompute Grid and the intergrid. Service protocol-oriented architectures are contrasted with service object-oriented architectures; then the current SORCER metacompute Grid, based on a service object-oriented architecture and the new exertion-based metacomputing paradigm, is described and analyzed. Finally, we explain how SORCER, with its core services and federated file system, can be used as a traditional compute Grid and as an intergrid - a hybrid of compute and metacompute Grids. Title: ROUND-TRIP ENGINEERING OF WEB APPLICATIONS FOCUSING ON DYNAMIC MODELS Author(s): Yuto Imazeki, Shingo Takada and Norihisa Doi Abstract: Enterprise information systems take many forms, one of which is Web applications. The demand for rapid development of such Web applications is becoming stronger, but there is still no good way to meet it. Round-trip engineering is a software development method that iterates between the modeling phase and the coding phase, allowing for incremental and rapid development. However, conventional tools support only static models such as class diagrams. We thus propose a tool for round-trip engineering of Web applications that supports dynamic models such as sequence diagrams and statecharts. We introduce a navigation model to model the navigation between Web pages. This model is used to link the various diagrams as well as to generate source code. We describe a case study to show the effectiveness of our tool. Title: SIMILARITY MATCHING OF BUSINESS PROCESS VARIANTS Author(s): Noor Mazlina Mahmod, Shazia Sadiq and Ruopeng Lu Abstract: Evidence from business work practice indicates that variance from prescribed business process models is not only inevitable and frequent, but is in fact a valuable source of organizational intellectual capital that needs to be captured and capitalized upon, since variance is typically representative of preferred and successful work practice.
In this paper, we present a framework for harnessing the value of business process variants. An essential aspect of this framework is the ability to search for and retrieve variants. This functionality requires variants to be matched against given criteria. The focus of this paper is on structural criteria, which are rather challenging since query process structures may have different levels of similarity with variant process structures. The paper provides methods for undertaking the similarity matching and subsequently providing ranked results in a systematic way, as well as a reference architecture within which the methods may be deployed. Title: TOWARDS AN EXTENDED ALIGNMENT MODEL FOR A COMPLETE ALIGNMENT OF MANUFACTURING INFORMATION SYSTEMS Author(s): Oscar Avila, Virginie Goepp and François Kiefer Abstract: Today, companies need flexible and adaptable Information Systems (IS) to support their business strategies and organisational processes, as well as to facilitate their adaptation to changes in the environment. In particular, companies in the manufacturing sector need flexible IS to make the integration of infrastructures easier, support production management and respond to the evolution of supporting technologies. To deal with these specific requirements, the alignment of the manufacturing IS with the organisation's strategy and its outside environment is necessary. However, most research in the IS alignment field concerns the alignment of IS with the organisation's strategy, neglecting alignment with the external environment. Thus, in order to support complete alignment of manufacturing IS, we propose in this paper an extension to the Strategic Alignment Model (SAM) of Henderson and Venkatraman (1999). The extended SAM allows the definition of possible alignment perspectives that accentuate the elements to be aligned and the alignment sequence.
This set of perspectives and the extended SAM are a first step towards tackling alignment with both the strategy and the environment. Title: ASSESSING REPUTATION AND TRUST IN SCIENTIFIC GRIDS Author(s): Andrea Bosin, Nicoletta Dessi and Balachandar Ramachandriya Amarnath Abstract: Up to now, reputation and trust have been widely acknowledged to be important for business environments, but little attention has been paid to the security aspects of Grids devoted to scientific applications. This paper addresses these aspects and proposes both a methodology and a model for assessing the reputation of computing resources belonging to a scientific Grid. Because the overall performance of the Grid depends on the quality of service given by each single resource, resource reputation is a measure of trustworthiness, in the sense of reliability, and is asserted over a set of properties, namely the operative context, that express the resource's capability to provide available and reliable services. Unlike business applications, we are not interested in assessing the reputation of a single resource but in finding a set of resources that have similar capabilities of successfully executing a specific job. Toward this end, the paper proposes a reputation model to cluster Grid resources into groups that exhibit similar behavioural patterns and share similar operative contexts. Simulation results show the effectiveness of the proposed approach and the possible integration of such a model in a real Grid. Title: BRIDGING UNCERTAINTIES GAPS IN SOFTWARE DEVELOPMENT PROJECTS Author(s): Deniss Kumlander Abstract: Uncertainties are a well-known factor affecting the final result of nearly any software project nowadays. Their negative impact is either a misfit between customers’ expectations and the released software, or extra effort that software vendors have to invest in the development process.
The paper presents some novel approaches to handling uncertainties, including ambassador-driven communication, discussion groups, and internal cycles of varying length with software demonstration meetings. Title: ACTION-BASED ANALYSIS OF BUSINESS PROCESSES Author(s): Reuven Karni and Maya Lincoln Abstract: BPM identifies, comprehends and manages business processes both within and across organizational units. This requires understanding the totality and interrelationships of an enterprise process suite – a complex undertaking, given the number of processes involved. We suggest a method for supporting the management of a business process suite from an action-based viewpoint. We abstract the action verb from each process descriptor and perform a verb frequency analysis. Through a Pareto approach, those verbs common to a large number of processes are identified. Semantic analysis of these significant actions provides directions for BPM support: locating actions in the planning, execution or control domain; identifying common procedures to be implemented; and ascertaining where operational coordination and consistency are required between organizational units. Title: FORMAL VERIFICATION OF THE SECURE SOCKETS LAYER PROTOCOL Author(s): Llanos Tobarra, Diego Cazorla, J. José Pardo and Fernando Cuartero Abstract: Secure Sockets Layer (SSL) has become one of the most popular security protocols on the Internet. In this paper we present a formal verification of this protocol using the Casper/FDR2 toolbox. In the analysis of the SSL v3.0 Handshake we have used a methodology that considers incremental versions of the protocol: we started with the most basic protocol and then included other features such as server and client authentication, digital signatures, etc. We have also verified SSL v2.0 because of the so-called version rollback attack. Each version has been modelled and verified, and the results have been interpreted.
Using this methodology, it is easy to understand why certain messages are needed in order to ensure confidential communication between a client and a server. Title: AUTOMATIC GENERATION OF UML-BASED WEB APPLICATION PROTOTYPES Author(s): Shinpei Ogata and Saeko Matsuura Abstract: The key to success in business system development is to sufficiently elicit user requirements from the customers and to fully and correctly define a requirements analysis model that meets these requirements. Prototyping is recognized as an effective software development method that enables customers to confirm the validity of the requirements analysis model at an early stage of system development. However, the development process requires guaranteeing consistency between the system model and the customer requirements that arise as a result of the confirmation. This paper proposes a method for the incremental validation of a Web application wherein a prototype system is automatically generated from a requirements analysis model based on UML (Unified Modeling Language). This model defines the interaction between the system and the user, in addition to defining the input/output data. Moreover, the automatic generation tool for the prototype system enables the developer to confirm the system image incrementally while developing the requirements analysis model in UML. We discuss the expressiveness of the generated prototype in comparison with a current groupwork support tool. Title: SELECTION CRITERIA FOR SOFTWARE DEVELOPMENT TOOLS FOR SMES - SMES AND COOPERATIVES IN VENEZUELA Author(s): Lornel Rivas, María Pérez, Luis E. Mendoza and Anna Grimán Abstract: Software engineering tools have regained interest in recent years due to the different changes affecting software development organizations. These organizations carry out activities that might be undertaken in a plan-driven or agile manner with the support of such tools.
A proper balance between both approaches and effective tool adoption will help organizations meet their objectives and evolve. Small and medium enterprises and Cooperatives (S&C) share common characteristics throughout Latin America. Small and medium enterprises (SMEs) lack formality in the roles of, and relationships among, interacting individuals, whereas Cooperatives are usually small companies with weaknesses in management techniques and technological equipment. In fact, both have difficulties in finding the right personnel and the tools that best suit their needs. Taking Venezuela as the subject of our study, we propose some criteria to assist S&C in the selection of tools that support their development processes while fostering the required balance between agility and discipline. These criteria were formulated based on the characterization of five factors aimed at determining this balance. These contributions will subsequently help identify methodological and technical aspects to guide these organizations in the improvement of their development processes. Title: ONTOLOGICAL ERRORS - INCONSISTENCY, INCOMPLETENESS AND REDUNDANCY Author(s): Muhammad Fahad, Muhammad Abdul Qadir and Muhammad Wajahat Noshairwan Abstract: Mapping and merging multiple ontologies to produce a consistent, coherent and correct merged global ontology is an essential process for enabling heterogeneous multi-vendor semantics-based systems to communicate with each other. To generate such a global ontology automatically, the individual ontologies must be free of all types of errors. We have observed that the present error classification does not include all the errors. This paper extends the existing error classification and provides a discussion of the consequences of these errors.
We highlight the problems that we faced while developing our ontology merging system, DKP-OM, and explain how these errors became obstacles to an efficient ontology merging process. The paper integrates ontological errors and design anomalies for the content evaluation of ontologies under one framework. This framework helps ontologists build semantically correct, error-free ontologies, enabling effective and automatic ontology mapping and merging with less user intervention. Title: REALIZING WEB APPLICATION VULNERABILITY ANALYSIS VIA AVDL Author(s): Ha-Thanh Le and Peter Kok Keong Loh Abstract: Several vulnerability analysis techniques for web-based applications detect and report on different types of vulnerabilities. However, no single technique provides a generic, technology-independent handling of web-based vulnerabilities. In this paper we present our experience with, and an experimental exemplification of, using the Application Vulnerability Description Language (AVDL) to realize a unified data model for technology-independent vulnerability analysis of web applications. We also give an overview of a new web vulnerability analysis framework. This work is part of a project funded by the Centre for Strategic Infocomm Technologies, Ministry of Defence, Singapore. Title: PHYSICAL-VIRTUAL CONNECTION IN UBIQUITOUS BUSINESS PROCESSES Author(s): Pau Giner, Manoli Albert and Vicente Pelechano Abstract: Ubiquitous Computing (Ubicomp), when applied to organizations, can improve their Business Processes. An example of such improvement is the integration of real-world objects into the Information System, which reduces inconsistencies in the information and improves information acquisition rates. Automatic identification is the key technology for achieving this paradigm shift.
The present work examines the relevance of the identification concept for Business Processes supported by Ubicomp technologies and presents a conceptual framework to define the elements involved in the identification process. Taking into account that Business Processes tend to be very dynamic, it is important to have technology-independent definitions in order to enable the evolution of systems. The presented framework is independent of the underlying technology, so as not to be locked into a particular solution. Title: IMPROVING ANALYSIS PATTERNS IN THE GEOGRAPHIC DOMAIN USING ONTOLOGICAL META-PROPERTIES Author(s): Evaldo Silva, Jugurta Oliveira and Gabriel Gonçalves Abstract: This article shows how analysis patterns in the geographic domain can be improved through ontological meta-properties, with the concept behind each class of a pattern analyzed ontologically. This improvement also makes it possible to restructure the class diagram of analysis patterns, increasing their reuse quality. Besides improving the analysis patterns, one more topic is proposed for the analysis pattern documentation template, based on the specification of the main classes defined during the process. Title: A METHOD FOR ENGINEERING A TRUE SERVICE-ORIENTED ARCHITECTURE Author(s): G. Engels, A. Hess, B. Humm, O. Juwig, M. Lohmann, J.-P. Richter, M. Voß and J. Willkomm Abstract: Service-oriented architecture (SOA) is currently the most discussed concept for engineering enterprise IT architectures. True SOA is more than web services and a web-services style of communication. In the first place, it is a paradigm for structuring the business of an enterprise according to services; this allows companies to flexibly adapt to changing market demands. Subsequently, it is a paradigm for structuring the enterprise IT architecture according to those business services. This paper presents a concrete method and rules for engineering an enterprise IT architecture towards a true SOA.
It can be seen as an instantiation of roadmaps in enterprise architecture frameworks. Title: A CONCEPTUAL SCHEME FOR COMPOSITIONAL MODEL-CHECKING VERIFICATION OF CRITICAL COMMUNICATING SYSTEMS Author(s): Luis Eduardo Mendoza Morales, Manuel I. Capel Tuñón, María A. Pérez and Kawtar Benghazi Ahklaki Abstract: When we build complex business and communication systems, a question worth answering is: how can we guarantee that the target system meets its specification? Ensuring the correctness of large systems becomes more complex when we consider that their behaviour results from the concurrent execution of many components. This article presents a compositional verification scheme that integrates MEDISTAM-RT (the Spanish acronym for Method for System Design based on Analytic Transformation of Real-Time Models), which is formally supported by state-of-the-art model-checking tools. To facilitate and guarantee the verification of large systems, the proposed scheme uses CCTL temporal logic as the formal property-specification language, in which the temporal properties required of any system execution are specified. In turn, the CSP+T formal language is used to formally describe a model of the system being verified, made up of a set of communicating processes detailing specific atomic tasks of the system. To show a practical use of the proposed conceptual scheme, the critical part of a realistic industrial project related to mobile phone communication is discussed. Title: TOWARDS REVERSE-ENGINEERING OF UML VIEWS FROM STRUCTURED FORMAL DEVELOPMENTS Author(s): Akram Idani and Bernard Coulette Abstract: Formal methods, such as B, were elaborated to ensure a high level of precision and coherence. Their major advantage is that they are based on mathematics, which allows, on the one hand, risks of ambiguity and uncertainty to be neutralized and, on the other hand, the conformance of a specification to its realization to be guaranteed. 
However, these methods use specific notations and concepts which often result in poor readability and in difficult integration into development and certification processes. To overcome this shortcoming, several research works have proposed to bridge the gap between formal developments and more intuitive and readable UML models. In this paper we are interested in the B method, a formal method used to model systems and check their correctness through refinement. Existing works that have tried to combine UML and B notations do not deal with the composition aspects of formal models. This limitation hampers their use for large-scale specifications, such as those of information systems, because such specifications are often developed as structured modules. This paper improves the state of the art by proposing an evolutive MDA-based framework for reverse-engineering UML static diagrams from B specifications built by composing abstract machines. Title: MARVIN - MODELING ENVIRONMENTS WITH UBIQUITOUS COMPUTING Author(s): Caio Stein D’Agostini and Patricia Vilain Abstract: When different devices and applications start to work with one another, physical and behavioral aspects of the environment have to be taken into account when designing a system. It is impractical, if not impossible, to list all possible devices, applications, and the new characteristics and behaviors that might emerge as a result of the autonomy of each part of the system, differing data interpretations, and self-organization capabilities. To cope with this difficulty, this paper combines the contributions of different works as a way to improve the modeling process for this type of computational environment. Title: EVALUATING CONSISTENCY BETWEEN UML ACTIVITY AND SEQUENCE MODELS Author(s): Yoshiyuki Shinkawa Abstract: UML activity diagrams and sequence diagrams describe the behavior of a target domain or a system from different viewpoints. 
When we use these diagrams to model the same matter in an application, the diagrams, or the models written in them, must be consistent with each other. However, evaluating this consistency is difficult, since the diagrams have considerably different syntax and semantics. This paper presents a process-algebraic approach to evaluating the consistency between these models, using CCS (the Calculus of Communicating Systems) as the process algebra. Title: EXPRESSING BUSINESS RULES IN BPMN Author(s): Li Peng and Hang Li Abstract: The Business Process Modelling Notation is a graph-oriented language for executable business processes. Business rules describe fundamental constraints on a system’s transactions. Changes to business rules have a high impact on business processes. It is important for these rules to be represented explicitly and to be automatically applicable. However, because business rules are expressed in a markedly different way from business processes, it can be challenging to integrate them into a model. In this paper, I propose an approach for expressing business rules in BPMN. The key idea is that business rules are operationalized by BPMN subprocesses. This method can improve requirements traceability in process design, as well as minimize the effort required for system changes due to changes in business rules. Title: MODELLING AND DISTRIBUTING INDIVIDUAL AND COLLECTIVE ACTIVITIES IN A WORKFLOW SYSTEM Author(s): Saïda Boukhedouma and Zaia Alimazighi Abstract: This paper deals with the modelling and dispatching of individual and collective work in a workflow system. It focuses particularly on workflow processes of organizations where actors are organized in teams. In such a system, participants must interact with each other, coordinate, exchange information, cooperate and synchronize their work. To do so, it is essential first to define how the individual and collective parts of the work are dispatched to all participants in the system. 
We describe some static and dynamic aspects of the system and propose a simple algorithm for managing human resources at process runtime. The algorithm aims to balance the workload among the human actors in the system; it is based on a workload criterion computed from activity durations. Title: BUSINESS DRIVEN RISK ASSESSMENT - A METHODICAL APPROACH TO RISK ASSESSMENT AND BUSINESS INVESTMENT Author(s): David W. Enström and Siavosh Hossendoust Abstract: Dynamic business environments require concurrent, distributed, and flexible architectures that must provide an agreeable level of reliability and an acceptable level of trust. Increasing solution architecture reliability through reviews and analysis of business requirements, architecture and design, resources and products is quickly replacing the prototype-and-test design model. A three-level, non-disruptive, business-driven planning process has been formulated using a risk analysis model that provides a justifiable direction for implementing a low-risk solution and selecting appropriate products. The methodology includes identification of a “Risk Priority” through assessment of risks for Business Effectiveness, Logical IT Solution Architecture (PIM) aspects, and Physical IT Solution Architecture (PSM) aspects. It also introduces the Risk Dependency Analysis process as an aid to understanding relationships between architectural layers. The proposed methodology aids in understanding and prioritizing risks within the context of the organization; it broadens the concept of a TRA into a risk-controlled solution architecture domain. Title: STUDY ON IP TRACEBACK SYSTEM FOR DDOS Author(s): Cheol-Joo Chae, Bong-Han Kim and Jae-Kwang Lee Abstract: The rapid growth in technology has led to misuse of the Internet, such as cybercrime. There are several vulnerabilities in the current firewalls and Intrusion Detection Systems (IDS) protecting network computing resources. 
Automatic real-time traceback techniques can track an Internet intruder and reduce the probability of successful attacks; given recent trends, such techniques have become indispensable. In this paper, we design and implement an IP traceback system. The design requires no modification of the router structure, and the technique can be deployed in larger networks. Our implementation shows that the IP traceback system is safe to deploy and protects data on the Internet from hackers and other attackers. Title: FROM PROCESS TO SOFTWARE SYSTEMS' SERVICE - USING A LAYERED MODEL TO CONNECT TECHNICAL AND PROCESS-RELATED VIEWS Author(s): Christian Prpitsch Abstract: Business processes are often defined by people not familiar with the technical details of software systems. Several languages exist for defining either a business process or technical details, but no language supports both. We present a solution that builds a bridge between these two disjoint areas by using a level of abstraction between them; along the way, we create highly reusable artefacts. Both parties use the same model, but from different perspectives. In the paper we use a short scenario to show the disadvantages of existing technologies, the requirements, and the benefits. The scenario is derived from e-learning but can be transferred to any collaborative process in which people create something. Title: A MODEL DRIVEN ARCHITECTURE TOOL BASED ON SEMANTIC ANALYSIS METHOD Author(s): Thiago Medeiros dos Santos, Rodrigo Bonacin, M. Cecilia C. Baranauskas and Marcos Antônio Rodrigues Abstract: The Information Systems literature presents a set of approaches to analyzing and modeling the organizational context in order to improve the quality of computational systems. The research community also looks for ways to align these approaches with Software Engineering techniques, which support large-scale, low-cost software production and maintenance. 
In this work we propose an approach and a tool to support the construction of computational models derived from organizational models. The approach is based on concepts, methods and techniques from MDA (Model Driven Architecture) and from Organizational Semiotics, more specifically MEASUR (Methods for Eliciting, Analyzing and Specifying User’s Requirements). An MDA tool was constructed to realize the approach. The first version of this tool provides means to model Ontology Charts. Using semi-automated transformations, it is also possible to produce UML (Unified Modeling Language) class diagrams from the modeled Ontology Charts. The tool’s features, design solutions, and capabilities are explained. Title: AN ANALYSIS PATTERN FOR MOBILE GEOGRAPHIC INFORMATION SYSTEMS TOWARD MUNICIPAL URBAN ADMINISTRATION Author(s): Bruno Rabello Monteiro, Jugurta Lisboa Filho, José Luís Braga and Waister Silva Martins Abstract: This paper introduces an analysis pattern for Mobile Geographic Information Systems (Mobile GIS) focused on urban administration applications. The pattern provides a diagram of classes and associations and can be used in the development of an urban Mobile GIS application. The paper also describes the process that guided us in obtaining this analysis pattern, presenting an example of its use in the conceptual modeling of an actual application. Title: DO SME NEED ONTOLOGIES? - RESULTS FROM A SURVEY AMONG SMALL AND MEDIUM-SIZED ENTERPRISES Author(s): Annika Öhgren and Kurt Sandkuhl Abstract: In recent years, an increasing number of successful cases of using ontologies in industrial application scenarios have been reported; the majority of these cases stem from large enterprises. The intention of this paper is to contribute to an understanding of the potentials and limits of ontology-based solutions in small and medium-sized enterprises (SMEs). 
The focus is on identifying application areas for ontologies that motivate the development of specialised ontology construction methods. The paper is based on results from a survey of 113 SMEs in Sweden, most of them from manufacturing industries. The results of the survey indicate a need among SMEs in three application areas: (1) management of product configuration and variability, (2) information searching and retrieval, and (3) management of project documents. Title: SEMIOTICS, MODELS AND COMPUTING Author(s): Bertil Ekdahl Abstract: Recently, semiotics has begun to be related to computing. Since semiotics is about the interpretation of signs, of which language is a chief part, such an interest may seem quite reasonable. The semiotic approach is supposed to bring semantics to the computer. In this paper I discuss how realistic this is from the point of view of computers as linguistic systems, that is, as interpreters of descriptions (programs). I maintain the holistic view of language, in which the parts form a whole and cannot be detached. This has the implication that computers cannot be semiotic systems, since the necessary interpretation part cannot be made part of the program. From the outside, a computer program can very well be considered semiotically, since the equivalence between computers and formal systems implies that there is a well-defined model (interpretation) that has to be communicated. Title: STRUCTURAL MODEL OF REAL-TIME DATABASES Author(s): Nizar Idoudi, Claude Duvallet, Bruno Sadeg, Rafik Bouaziz and Faiez Gargouri Abstract: A real-time database is a database in which both the data and the operations upon the data may have timing constraints. The design of this kind of database requires the introduction of new concepts to model both the data structures and the dynamic behavior of the database. In this paper, we propose a UML 2.0 profile, entitled UML-RTDB, allowing the design of a structural model for a real-time database. 
One of the main advantages of UML-RTDB is its capacity to take real-time database properties into account through specialized concepts, in a rigorous, easy and expressive manner. Title: B2B AUTOMATIC TAXONOMY CONSTRUCTION Author(s): Ivan Bedini, Benjamin Nguyen and Georges Gardarin Abstract: The B2B domain has already been the subject of several research efforts, but we believe that the real advantage of introducing semantic technologies into enterprise application integration has not yet been fully investigated. In this paper we provide a new use case for the next generation of Semantic Web applications with regard to enterprise application integration. We also present the results of our experience in automatically generating a taxonomy from numerous B2B standards, constructed using Janus, a software tool we have developed to extract semantic information from XML Schema corpora. The main contribution of this paper is the presentation of the results of our tool. Title: APPLYING ACTIVITY PATTERNS FOR DEVELOPING AN INTELLIGENT PROCESS MODELING TOOL Author(s): Lucinéia Heloisa Thom, Manfred Reichert, Carolina Ming Chiao and Cirano Iochpe Abstract: Due to their high level of abstraction and their reusability, workflow patterns are increasingly attracting the interest of both BPM researchers and BPM tool vendors. Frequently, process models can be assembled out of a set of recurrent business functions (e.g., task execution request, approval, notification), each of them having generic semantics that can be described as an activity pattern. To the best of our knowledge, so far there has been no extensive work implementing such activity patterns in a process modeling tool. In this paper we present an approach for modeling business processes and workflows. It is based on a suite which, when implemented in a process modeling tool, allows business processes to be designed from well-defined (process) activity patterns. 
Our suite further provides support for analysing and verifying certain properties of the composed process models (e.g., absence of deadlocks and livelocks). Finally, our approach considers both business processes designed from scratch and processes extracted from legacy systems. Title: IT GOVERNANCE FRAMEWORKS AS METHODS Author(s): Matthias Goeken and Stefanie Alter Abstract: At present there is little academic support for the challenges of IT management. As a reaction, various best-practice frameworks have been developed, which can be subsumed under the topic of IT governance. These still have no sound scientific foundation. To take a step in this direction, we present a method metamodel of COBIT, the popular IT governance framework. A major goal is to represent the underlying logical and semantically rich structure of this framework, which turns out to be fruitful for comparing and integrating different frameworks. Furthermore, frameworks can be checked for completeness and integrated on this basis. An interesting application is the representation of IT governance frameworks in a tool, which is also demonstrated in the paper. Title: FROM INTERORGANIZATIONAL MODELLING TO ESSENTIAL USE CASES Author(s): Luciana C. Ballejos and Jorge M. Montagna Abstract: An information system analysis cannot be independent of the social and organizational context where the system will be implemented. Requirements Engineering (RE) must initially focus on modelling the information system domain rather than its functionalities. Thus, diverse techniques exist for representing the properties of organizational environments. However, they do not consider many attributes that are characteristic of contexts where two or more organizations interact, known as interorganizational contexts. 
This article proposes an extension of the i* modelling framework (Yu, 1995) for interorganizational environments, as an important step towards the analysis and design of interorganizational information systems (IOSs). The extensions (new elements and a new model) are then used to derive essential use case descriptions, thus demonstrating their usefulness at the IOS RE stage. Diverse transformation rules are explained and an example is instantiated to illustrate them clearly. Title: RELATING SYSTEM DYNAMICS AND MULTIDIMENSIONAL DATA MODELS - A METAMODEL BASED LANGUAGE MAPPING Author(s): Lars Burmester and Matthias Goeken Abstract: System Dynamics (SD) is an approach with a long tradition, used for modelling and simulating complex systems. Early on, a conceptual modelling language was applied to bridge the “linguistic gap” between the natural language of the model users and the targeted simulation language. Despite the maturity of the modelling approach, to this day no formal linguistic definition (linguistic metamodel) exists for the languages used, resulting in non-compliant language extensions and preventing reasonable combination with other modelling languages, e.g. for use in Business Intelligence systems. This paper aims at developing a formal linguistic definition of the language used in SD modelling, in terms of a linguistic metamodel. Further, metamodel-based combination of modelling languages is demonstrated by relating the SD language to multidimensional data modelling. Finally, the extensions of the SD language necessary for this mapping are performed via extension of the linguistic metamodels. Title: A SERVICE-ORIENTED FRAMEWORK FOR MAS MODELING Author(s): Wautelet Yves, Achbany Youssef and Kolp Manuel Abstract: This paper introduces an analysis framework made up of different models offering complementary views. 
The first view - the strategic services diagram (SSD) - is the most aggregate knowledge level; it uses services to represent the system-to-be in a global manner. The second view offers a more detailed perspective on the agents involved in the services, using traditional i* Strategic Dependency (SDD) and Strategic Rationale Diagrams (SRD). Finally, the third view - the Dynamic Services Hypergraph (DSH) - offers a dynamic view of service realization paths at various Qualities of Service (QoS). Using these models at the analysis level is valuable in a model-driven software development approach: the project stakeholders can share a common vision by looking at the services the system has to offer, at the depender and dependee agents for the services' constituents, and at the different realization paths and their QoS. The framework also offers the fundamental elements for developing a complete model-driven software project management framework, and can be considered a starting point for a broader software engineering method. Title: USING VARIANTS IN KAOS GOAL MODELLING Author(s): Joël Brunet, Farida Semmak, Régine Laleau and Christophe Gnaho Abstract: In this paper we apply a certain kind of variability to KAOS goal and responsibility models, which allows some links in them to be conditioned on the choice or non-choice of certain variants of the system. These variants are grouped in facets and organized in a variant tree that has the semantics of what we call a metagraph, whose instances are all the goal/responsibility graphs generated when all variants are fixed. We use a case study from the land transportation domain: a simplified cycab (or cybercar) with a few variants. The overall case study is part of the ANR TACOS project, whose final aim is to define a component-based approach to specifying systems with high safety requirements, based on this domain, which is the subject of intensive research. 
Title: IMPROVING THE UNDERSTANDABILITY OF I* MODELS Author(s): Fernanda Alencar, Carla Silva, Márcia Lucena, Jaelson Castro, Emanuel Santos and Ricardo Ramos Abstract: Requirements engineering (RE) is considered a key activity in almost all software engineering processes. i* is a goal-oriented approach widely adopted in the earlier phases of RE, as it offers a modeling language that describes the system and its environment in terms of actors and the dependencies among them. However, the models often become cluttered even for small applications, compromising their understandability, evolution and scalability. In large and complex applications, this problem increases significantly. In this paper we investigate the use of structuring mechanisms to deal with the complexity that may arise when i* is used to model complex domains. Title: BUSINESS PROCESS MODELLING THROUGH EQUIVALENCE OF ACTIVITY PROPERTIES Author(s): Carla Marques Pereira and Pedro Sousa Abstract: For two decades, business processes have been gaining attention from IS managers, consultants and researchers, and are becoming a key element in driving IT innovation. Despite such prominence, the concept of a business process is not clear enough, and even more unclear is the way to depict an organization's activities in a process blueprint. We claim that a much more precise and consolidated concept of “business process” is required. In this paper, we explain the constraints that influence business process modelling and propose a solution based on a multi-dimensional representation that decomposes processes into the Zachman Framework dimensions. Upon this solution we build the concept of process equivalence. Title: ECM SYSTEMS ANALYSIS AND SPECIFICATION - TOWARDS A FRAMEWORK FOR BUSINESS PROCESS RE-ENGINEERING Author(s): Jan vom Brocke, Alexander Simons and Anne Cleven Abstract: Information Systems (IS) analysis and specification is of great importance for successfully implementing IS. 
An emerging field in IS research is Enterprise Content Management (ECM). In today's working life, content is increasingly becoming a critical success factor and strategic asset. ECM has the potential to significantly influence both business process efficiency and effectiveness. However, there is still uncertainty about how to realize this potential. Current approaches in the field of ECM rarely provide methodical support for ECM systems analysis and specification. Moreover, the necessity of re-engineering business processes when implementing ECM is far too often neglected. With this paper, we introduce a framework for ECM adoption and business process re-engineering by making use of established methods in the field of Business Process Management (BPM), such as business process specification and analysis. Title: QUALITY IMPROVEMENT OF WORKFLOW DIAGRAMS BASED ON PASSBACK FLOW CONSISTENCY Author(s): Osamu Takaki, Takahiro Seino, Izumi Takeuti, Noriaki Izumi and Koichi Takahashi Abstract: Passback flows are a kind of flow that appears as a replay of operations in a workflow diagram. To reason about the consistency of a workflow diagram and to verify that consistency, passback flows need to be treated specially. In this paper, we formalize passback flows in a workflow diagram using only graph-theoretical properties of the diagram, and we give an algorithm that detects and removes all passback flows in a workflow diagram. Furthermore, with this algorithm, we extend consistency properties of the structure and life cycles of evidences from acyclic workflow diagrams to general workflow diagrams. Our methodology enables us to improve the quality of workflow diagrams with loops. Title: ONTOLOGY BASED SEMANTIC REPRESENTATION OF THE REPORTS AND RESULTS IN A HOSPITAL INFORMATION SYSTEM Author(s): B. Prados-Suárez, E. González Revuelta, C. Peña Yáñez, G. Carmona Martínez and C. 
Molina Fernández Abstract: The main purpose of this paper is to contextualize access to the huge amount of results and reports that a Hospital Information System (HIS) can come to hold. Our goal is to integrate a semantic layer with the HIS, so that users can employ this layer to access just the precise information needed in their working context. Here we propose a new navigation system based on the semantic characteristics of the data accessed, their complementary characteristics, properties and relations, thus providing the HIS with a new tool to solve an evident problem for its users. Title: CASE ON MODELING OF MANUFACTURING SYSTEMS BY MODIFIED IDEF0 TECHNIQUE Author(s): Vladimír Modrák Abstract: The paper is concerned with process management from the point of view of business process mapping. It focuses on methodological aspects of business process modeling leading to the development of a map of processes with consistent linkages between all hierarchical levels. The approach used aims to support the management of processes that flow across departments and/or functions within the organization. The developed process-mapping technique is based on process decomposition, resulting in a set of business structure models represented by diagrams. Title: A QVT-BASED APPROACH FOR MODEL COMPOSITION - APPLICATION TO THE VUML PROFILE Author(s): Adil Anwar, Sophie Ebersold, Mahmoud Nassar, Bernard Coulette and Abdelaziz Kriouile Abstract: With the increasing importance of models in software development, many activities such as transformation, verification and composition are becoming crucial in the field of Model Driven Engineering (MDE). Our main objective is to propose a model-driven approach to composing design models. This approach is applied to the VUML profile, which allows us to analyse/design a system on the basis of functional points of view. 
In this paper we first describe a transformation-based composition process and then specify the transformations as a collection of QVT-Core rules. The proposal is illustrated with a simple example. Title: INTEGRATING TECHNICAL APPROACHES, ORGANISATIONAL ISSUES, AND HUMAN FACTORS IN SECURITY RISK ASSESSMENT BY ORGANISING SECURITY RELATED QUESTIONS Author(s): Lili Yang, Malcolm King and Shuang Hua Yang Abstract: This paper aims to develop a multiple-perspective framework for employee security risk assessment by simultaneously addressing three distinct perspectives: a technical, an organisational, and a human perspective. This integrated approach differs fundamentally from using a technical perspective for the technical issues, an organisational perspective for the organisational issues, and a human perspective for the human issues, which is not adequate when carrying out security risk assessment. The article develops a security-related question library that integrates organisational culture and human factors with network security risk assessment in an ISO/IEC 27001 compliant environment. The case study in our research illustrates that any perspective may illuminate any issue, that the different perspectives are mutually supportive rather than mutually exclusive, and that every information security issue, regardless of its nature, should be addressed from all three perspectives. Title: FACILITATING AND BENEFITING FROM END-USER INVOLVEMENT DURING REQUIREMENTS ANALYSIS Author(s): Jose Luis de la Vara and Juan Sánchez Abstract: Although requirements analysis is a critical success factor in information system development for organizations, problems related to the requirements stage are frequent. System analysts usually lack understanding of the business, focus only on the purpose of the system, and can easily miscommunicate with end-users. 
To prevent these problems, this paper describes an approach that tries to facilitate and benefit from end-user involvement during requirements analysis. The approach is based on BPMN and the goal/strategy Map approach. The business environment is modelled in the form of business process diagrams. The diagrams are validated by end-users, and the purpose of the system is then analyzed in order to agree on the effect that the system should have on the business processes. Finally, requirements are specified by describing the business process tasks to be supported by the system. Title: PROGNOSTIC CAPACITY MANAGEMENT FROM AN IT SERVICE MANAGEMENT PERSPECTIVE Author(s): Thomas Jirku and Peter Reichl Abstract: The biggest challenge faced by most IT organizations is to find the perfect balance between continuously running the IT infrastructure, enhancing the quality of their services, and responding with increasing agility to changing business needs, all under the additional constraint of decreasing budgets. The resulting pressure has become a main driver towards process re-engineering and ITIL. Among ITIL's best-practice recommendations, capacity management is a key one. In this paper we discuss how to turn IT capacity management from pure outage prevention into a proactive mode that makes it possible to better align IT budgets with business needs, to plan investments ahead, and to increase service quality through more efficient resource usage. After a brief overview of ITIL basics, including the properties of a platform-independent central database, we discuss fundamental performance and workload concepts and present a case study demonstrating how to perform a holistic performance analysis of IT services. Title: INSCO REQUISITE - A WEB-BASED RM-TOOL TO SUPPORT HYBRID SOFTWARE DEVELOPMENT Author(s): F. J. Orellana, J. Cañadas, I. M. Del Águila and S. 
Túnez Abstract: This paper presents InSCo Requisite, a Web-based Requirements Management Tool (RM-tool) supporting InSCo, a methodology based on CommonKADS and RUP for developing software systems in which traditional information systems coexist with knowledge-based components. A review of similar requirements management tools is presented, as well as a review of the main requirements proposed for this kind of tool. Title: IMPROVEMENT OF A WEB ENGINEERING METHOD APPLYING SITUATIONAL METHOD ENGINEERING Author(s): Kevin Vlaanderen, Francisco Valverde and Oscar Pastor Abstract: In recent years, the Web Engineering community has introduced several model-driven methods in order to simplify Web Application development. However, these methods are too general and mainly focus on data-intensive Web Applications. A solution to this problem is Situational Method Engineering, an approach that allows a web engineering method to be created or improved by reusing method fragments from previous methods. In this way, a method suitable for a concrete project or domain can be designed. In this work, the OOWS method metamodel is defined with the purpose of applying Situational Method Engineering. Thanks to this metamodel, OOWS method fragments can be formalised and used to improve the efficiency of other Web Engineering methods. Furthermore, the suitability of the OOWS method in the context of CMS-based web applications is evaluated through a user-registration case study. The result of this evaluation is a list of the current limitations of the OOWS method in the CMS Web Systems domain, together with possible solutions. 
Title: BP-FAMA: BUSINESS PROCESS FRAMEWORK FOR AGILITY OF MODELLING AND ANALYSIS Author(s): Mohamed Boukhebouze, Youssef Amghar and Aïcha-Nabila Benharkat Abstract: In this paper we present the BP-FAMA framework (Business Process Framework for Agility of Modelling and Analysis), motivated by the need for an abstraction level in which algorithmic concerns and management rules are separated for a better evolution of processes. Currently proposed approaches mix business rules and algorithmic structures, making processes difficult to change. The other motivation is to improve the quality of the translation of process specifications into high-level executable processes. These objectives call for new agile modelling languages, BPAMN (Business Process Agile Modelling Notation) and BPADL (Business Process Elements Description Language), and a new agile execution language, BPAEL (Business Process Agile Execution Language). Title: LEGACY SYSTEMS ADAPTATION USING THE SERVICE ORIENTED APPROACH Author(s): Francisco Javier Nieto, Iñigo Cañadas and Leire Bastida Abstract: Legacy systems are the core IT assets of the great majority of organisations and support their critical business processes. Integrating those existing legacy systems with the rest of the IT infrastructure is a complex and difficult task. Legacy systems are often undocumented, inflexible and tightly coupled, and imply high maintenance costs. Many organisations are starting to look at Service Oriented Architectures (SOA) as a potential way to expose their existing legacy investment as functional units to be re-used and exploited externally. This paper focuses on providing guidance to those organisations which want to use SOA for legacy adaptation and transformation.
To do so, this paper defines a vision and a set of best practices that any organisation should follow in order to expose their useful legacy functionalities as part of a SOA environment, allowing the development of hybrid systems understood as compositions of new services as well as of legacy systems and existing components wrapped as services. Title: NORM ANALYSIS SUPPORTING THE DESIGN OF CONTEXT-AWARE APPLICATIONS Author(s): Boris Shishkov, Marten van Sinderen, Kecheng Liu and Hui Du Abstract: In this paper we explore the usefulness of elaborating process models with norms, especially focusing on NAM (Norm Analysis Method) as an elaboration tool that can be combined with a process modeling tool, such as PN (Petri Net). The PN-NAM combination has been particularly considered in relation to a challenge that concerns the design of context-aware applications, namely the challenge of specifying and elaborating complex behaviors that may include alternative (context-driven) processes (we assume that a user context space can be defined and that each context state within this space corresponds to an alternative application service behavior). Hence, the main contribution of the current paper comprises adaptability-driven methodological and modeling support for the design of context-aware applications; modeling guidelines are proposed, reflected through modeling tools (in particular PN and NAM), and partially illustrated by means of a small example. Title: RICAD: TOWARDS AN ARCHITECTURE FOR RECOGNIZING AUTHOR'S TARGETS Author(s): Kanso Hassan, Elhore Ali, Soulé-Dupuy Chantal and Tazi Said Abstract: The growing volume of electronic documents on the Web makes their retrieval difficult. One reason for this difficulty is the lack of rich representation of their structures.
Intentional structures promise to be a new paradigm to extend existing document structures and to enhance the different phases of the document lifecycle, such as creation, editing, search and retrieval. The objective of this work is to propose a model of intentional structure and an analyzer to recognize the author’s intentions from documents written in a specific domain. On the one hand, this system is based on an ontology of intentions. On the other hand, it is based on a set of algorithms which facilitate the building of the intentional structure. The main principle of these algorithms is to reproduce writers’ skills. The role of the system is to recognize the intentional structure, i.e., to segment a document in a semi-automatic way according to the author’s intentions, and to extract the intentional verbs accompanied by their concepts from each segment through the algorithms of our analyzer. This analyzer is also able to update the ontology of intentions to enrich the knowledge base containing all possible intentions of a domain. This article presents an experiment on scientific publications in the field of computer science. Title: TOWARDS A VALUE-ORIENTED APPROACH TO BUSINESS PROCESS MODELLING Author(s): Jan vom Brocke, Jan Mendling and Jan Recker Abstract: To date, typical process modelling approaches put a strong emphasis on behavioural aspects of business operations. However, they often neglect value-related information. Yet, such information is of key importance to strategic decision-making, for instance in the context of process re-engineering. In this paper we propose a value-oriented approach to business process modelling that facilitates managerial decision-making in the context of process re-design. The contribution of the paper is two-fold: First, we synthesise a number of key concepts from operations and financial management and show how they can be integrated with current business process modelling techniques.
Second, we present a structured approach for evaluating the value of process design alternatives, so that managers can be guided towards process innovations that add real business value. Title: TESTING-BASED COMPONENT ASSESSMENT FOR SUBSTITUTABILITY Author(s): Andres Flores and Macario Polo Usaola Abstract: Updating systems assembled from components demands careful treatment due to stability risks. Replacement components must be properly evaluated to identify the required similar behaviour. Our proposal complements the regular compatibility analysis with black-box testing criteria to reinforce reliability. The aim is to analyze the functions of data transformation encapsulated in components, i.e. their behaviour. This complies with the observability testing metric. A Component Behaviour Test Suite is built at the integration level, to be later applied to candidate upgrades. The approach is supported by a tool developed in our group, testooj, which is focused on Java components. Title: TOWARDS A DATA INTEGRATION APPROACH BASED ON BUSINESS PROCESS MODELS AND DOMAIN ONTOLOGIES Author(s): Fernanda Baião, Flavia Santoro, Hadeliane Iendrike, Claudia Cappelli, Mauro Lopes, Vanessa T. Nunes and Ana Paula Dumont Abstract: Building integrated technological solutions in an organization without undesired information redundancy across several databases is still a challenge for the Information Systems research area. Domain ontologies are considered a solution, since they allow people and software agents to share a common agreement about information and semantics in a specific domain of knowledge. However, for this integration to be carried out effectively, the ontology should be kept up-to-date with respect to concept definitions, current business rules, and all information that is manipulated throughout the organization. This may be very difficult to achieve, especially when considering dynamic organizations.
In this paper we present an approach for developing a domain ontology from business process models. Business process models are concrete artifacts that may be described in both graphical and textual representation languages, and aggregate different views of knowledge in a domain of discourse. In one of those aggregated views, business process models may explicitly identify which information is manipulated by each activity in the sequence of activities composing the process, as well as the business rules. We argue that the association of information and activities minimizes the risk of misinterpreting concepts, thus helping to build integrated data models. Title: A WEB 2.0 CASE TOOL SUPPORTING PACKAGED SOFTWARE IMPLEMENTATION Author(s): Harris Wu and Georges Arnaout Abstract: Companies increasingly rely on implementing packaged software instead of developing custom solutions. Packaged software implementation (PSI) is the process of solving business problems by customizing and integrating an off-the-shelf software package. However, there has been a lack of CASE (Computer Aided Software Engineering) tools to support PSI. This paper presents a Web 2.0 based tool that supports case-based reasoning in PSI. The tool helps users explore past design cases, find a similar case, and reuse the design for that case in new problem situations. Our belief is that by utilizing the social power of a large group of users, better designs can be achieved at lower risks and lower costs. Title: AN OVERVIEW OF SOFTWARE PROCESS QUALITY INSTRUMENTS ADOPTION IN BRAZIL Author(s): Rodrigo Santos de Espindola, Edimara Mezzomo Luciano and Jorge Luis Nicolas Audy Abstract: This paper presents results from a quantitative research study that was conducted among 260 participants of a software quality event in Brazil. This research aims at understanding the spread of quality instruments in this context and the occurrence of several IT problems in the analyzed organizations.
The main contribution is the identification of the respondents' perception of the relationship between the adoption of quality instruments and their impact on several IT problems, through statistical analysis of the collected data. Title: EVOLUTION OF INFORMATION SYSTEMS WITH DATA HIERARCHIES Author(s): Bogdan Denny Czejdo Abstract: Research and practical efforts have intensified recently in the area of Information Systems (IS) supporting data and application evolution. The need to support IS evolution is caused by a variety of reasons, including the dynamicity of data sources, changing processing requirements, and the use of new technologies. In this paper we concentrate on the evolution of IS data repositories caused by the dynamicity of data sources. Our approach is to capture changes of various data hierarchies and use them as rules to implement the evolution of the IS data repository. Evolution of hierarchies can be categorized into hierarchy creation, deletion, and modification, for both schema hierarchies and instance hierarchies. Title: TOWARDS CREATION OF LOGICAL FRAMEWORK FOR EVENT-DRIVEN INFORMATION SYSTEMS Author(s): Darko Anicic and Nenad Stojanovic Abstract: This paper addresses event-driven Information Systems. Such systems are enabled to actively respond to events or state changes. Hence their behavior is programmable by means of ECA rules. We propose an implementation of ECA rules in a completely logical framework, using Transaction Datalog as the underlying logic. In this way, we extend the current ECA framework with powerful and declarative semantics, which also have an appropriate procedural interpretation.
We show how the logical calculus of Transaction Datalog can be exploited for realizing composite events, conditions, and actions, justifying the use of declarative semantics for solving some of the existing issues in reactive systems. Title: A RECONFIGURABLE SECURITY MANAGEMENT SYSTEM WITH SERVICE ORIENTED ARCHITECTURE Author(s): Ing-Yi Chen and Chao-Chi Huang Abstract: This paper proposes an Information Security Management System (ISMS), which is essentially a service-oriented mechanism. The system supports distributed process changes at run-time and as needed. In addition, this study uses a Service-Oriented Software Engineering (SOSE) development methodology for designing and building a Service Oriented Architecture (SOA) ISMS. These features represent significant improvements upon existing systems because they allow for a dramatic increase in the ease and efficiency with which system modification can occur. They also allow for the reuse of existing services and their recombination, either with other existing services or with new software, in order to create new processes. The features afforded by this system hold tremendous potential for use within a range of industries and organizations, especially those that seek to provide on-line services to their customers or product users. Title: A SUPPLY CHAIN ONTOLOGY CONCEPTUALIZATION WITH FOCUS ON PERFORMANCE EVALUATION Author(s): Alicia C. Böhm, Horacio P. Leone and Gabriela P. Henning Abstract: Organizations all over the world are increasingly aligning in Supply Chains (SCs) in order to perform more efficiently and to achieve better results. This contribution presents a SC ontology that aims at conceptualizing and formalizing this domain knowledge. Its goals are to (1) enable a better understanding among the various stakeholders, and (2) set the basis for effective information sharing and the development of integration tools.
The ontology introduces concepts associated with the SC structure, functions and resources, as well as management issues. Since one key component of management is performance evaluation, which must be carried out along the SC, the proposed ontology focuses on performance evaluation aspects. Title: METHODOLOGICAL EXTENSIONS FOR SEMANTIC BUSINESS PROCESS MODELING Author(s): David de Francisco, Ivan Markovic, Javier Martínez, Henar Muñoz and Noelia Pérez Abstract: Semantic Business Process Management (SBPM) aims, among other advantages, to facilitate the reuse of knowledge related to the definition of business processes, as well as to provide an easier transition from models to executable processes by applying semantic technologies to BPM. In this article the authors focus on extending the scope of knowledge reuse from executable processes to semantic models. This is done by providing methodological extensions to existing BPM methodologies in order to take advantage of semantic technologies during business process modeling. Title: IT SERVICE MANAGEMENT USING HETEROGENEOUS, DYNAMICALLY ALTERABLE CONFIGURATION ITEM LIFECYCLES Author(s): David Loewenstern and Larisa Shwartz Abstract: IT service management requires the management of heterogeneous artifacts such as configuration items with differing lifecycles and complex interrelationships. Lifecycle management in such environments is critical for providing feedback and control between processes and for sustaining communication and synchronization between and within processes, while maintaining the integrity of the configuration management database (CMDB). We propose a method for managing lifecycles of multiple types, with automated methods for inheritance of lifecycles, authorization of state changes, propagation of state changes through relationships, and dynamic lifecycle updates. Title: KNOWLEDGE REPRESENTATION AND COST MANAGEMENT FOR SUPPLY CHAINS Author(s): K.
Donald Tham Abstract: The intent of this research is to make a contribution to SCEM and SCM systems. Severe competition for goods and services and the demanding requirements of customers have given rise to ever-expanding global supply chains. Recognizing the growing significance of supply chains, we first present a means towards achieving knowledge representation (KR) of supply chains with enterprise modelling. Second, to promote effective, efficient and timely cost management throughout the supply chain, we present Temporal-ABC™ (registered trademark of Nulogy Corporation) together with a formalization of the costs of resources (i.e., resource cost units) and the ever-increasing overhead costs incurred by enterprises. A means towards consistent, unambiguous, traceable and more accurate costing of each activity throughout the supply chain would go a long way towards establishing its success. Title: TOWARDS A SEMI-AUTOMATIC TRANSFORMATION PROCESS IN MDA - ARCHITECTURE AND METHODOLOGY Author(s): Slimane Hammoudi, Wajih Alouini and Denivaldo Lopes Abstract: Recently, Model Driven Engineering (MDE) approaches have been proposed for supporting the development, maintenance and evolution of software systems. Model Driven Architecture (MDA) from the OMG (Object Management Group), “Software Factories” from Microsoft and the Eclipse Modelling Framework (EMF) from IBM are among the most representative MDE approaches. Nowadays, it is well recognized that model transformations are at the heart of these approaches and consequently represent one of the most important operations in MDE. However, despite the multitude of model transformation language proposals emerging from academia and industry, these transformations are often created manually. In the first part of this paper we propose an extended architecture that aims to semi-automate the transformation process in the context of MDA.
This architecture introduces mapping and matching as first-class entities in the transformation process, represented by models and metamodels. In the second part, our architecture is reinforced by a methodology which details the different steps leading to a semi-automatic transformation process. Finally, a classification of these different steps according to two main criteria is presented: how the steps are achieved (manually or automatically), and who is responsible for their achievement (expert, designer or software). Title: GRAPHICAL REPRESENTATIONS OF MESSAGE EXCHANGES IN WEB SERVICE-BASED APPLICATIONS Author(s): René Pegoraro, João Maurício Rosário and Khalil Drira Abstract: The Web service-based application is an architectural style in which a collection of Web services communicate with each other to execute processes. With the increasing popularity of Web service-based applications, and since the messages exchanged inside these applications can be very complex, we need tools to simplify the understanding of the interrelationships among Web services. This work presents a description of a graphical representation of Web service-based applications and the mechanisms inserted between Web service requesters and providers to capture the information needed to represent a process. The major contribution of this paper is discussing and using HTTP and SOAP information to show a graphical representation, similar to a UML sequence diagram, of SOA Web service-based applications. Title: MEASURING THE E-PARTICIPATION IN DECISION-MAKING PROCESSES THROUGH ONLINE SURVEYS Author(s): Cristiano Maciel, Licinio Roque and Ana Cristina Garcia Abstract: The deliberative decision-making process of a group can be the result of counseling and voting mediated by technology. The involvement of citizens in this process is crucial, and measuring participation in this process allows for assessment of the effectiveness of participation. Measuring the maturity of this decision, i.e.
assessment of individual participation and its consequent reflection on the group’s decision, can be accomplished through the maturity level method discussed in this paper. To examine the relative potential and difficulties in achieving and measuring e-participation, we found it necessary to have a reasonable level of information structuring. For this purpose, online surveys were built and tested in stages, structured according to the Government-Citizen Interactive Model in such a way as to support the Degree of Maturity Method (DMM). In the general scope of this research, two distinct proposals are verified for the realization of a structured deliberative process, with consulting and voting, with the aim of measuring the maturity of the decisions taken: the application of online surveys in stages, and the use of a web application. The use of surveys is the focus of this paper, tested in a case study that allows the DMM to be verified. Title: A FAHP-BASED TECHNOLOGY SELECTION AND SPECIFICATION METHODOLOGY Author(s): Kin Chung Liu, Dennis F. Kehoe and Dong Li Abstract: Selection of technology in IT projects is recognized as a multi-criteria decision-making (MCDM) problem because it is important to incorporate multiple opinions from people and to consider the interdependence among criteria (Lee and Kim, 2000). Various techniques have been proposed to address technology selection problems, and some of them, such as the analytic hierarchy process (AHP) (e.g. Bard, 1986), have proved successful in the literature. However, the technology selection problem in a system development project can be viewed as a system design activity, and there is a lack of literature proposing a system design framework or methodology to integrate technology selection with other system design activities.
From a system design perspective, this research argues that AHP can be applied to generate technology specifications and other useful information for system design purposes, in addition to technology selection. A high-level system design framework and the FAHP-based technology specification methodology are presented in this paper. Title: TOWARD A HYBRID ALGORITHM FOR WORKFLOW GRAPH STRUCTURAL VERIFICATION Author(s): Karim Baïna and Fodé Touré Abstract: Enterprise business procedures can be formalized by process models, which must be correct in the sense that they do not negatively affect the functional objectives. The appropriate definition, analysis, checking and improvement of these models are indispensable before the deployment of the process model using a workflow management system. In this paper we present a new hybrid algorithm for workflow graph structural validation, combining graph reduction and traversal mechanisms. Our algorithm is compared to existing workflow structural checking approaches. Title: INFORMATION SYSTEM ENGINEERING FOR AN ELECTRICITY DISTRIBUTION COMPANY Author(s): Diane Asensio, Abdelaziz Khadraoui and Michel Léonard Abstract: The Information Systems (IS) of electricity distribution companies are often composed of various heterogeneous applications or systems that need to exchange information. Distribution activities evolve in different contexts and environments, and interoperability is a particularly difficult task to realize for electricity distribution companies. A laws-based IS engineering approach has been proposed in our research laboratory. Given that distribution activities are regulated by a legal framework, we propose to apply this approach to the electricity distribution domain.
Title: AS-IS CONTINUOUS REPRESENTATION IN ORGANIZATIONAL ENGINEERING Author(s): Nuno Castela and José Tribolet Abstract: If the organizational model were a trustworthy and updated representation of the organization, in all of its aspects and perspectives, it could be permanently used as a support base for most operational and management tasks, using all its capacities for capturing, representing and distributing organizational knowledge. However, this model has been used mainly to support organizational activities isolated in time, instead of constituting a solid foundation for supporting daily organizational activities by acting as an organizational knowledge repository. This results mainly from the difficulty of keeping the model updated and aligned with reality. Research in organizational engineering is already mature in defining modeling artefacts, modelling languages and the views necessary to adapt the model to its users, promoting its usage on a continuous basis. This paper proposes a process to keep the organizational as-is model updated. The strategy presented in this paper considers the organizational model as a representation of the organizational conscience, continuously aligned with reality. Title: STRATEGIC INFORMATION REQUIREMENTS ELICITATION - DEFINITION OF AGGREGATED BUSINESS ENTITIES Author(s): Gianmario Motta and Giovanni Pignatelli Abstract: This paper presents a universal meta-model for Strategic Information Requirements Elicitation and a methodology to generate and use strategic information models. The framework fills the methodological gap that exists in Strategic Information Requirements Elicitation and supports the analyst in (a) defining structured high-level information requirements and (b) assessing informational support from a variety of perspectives. The meta-model enhances the e-TOM Aggregate Business Entity concept by adding the concepts of specialization and decomposition.
The methodology uses several perspectives to assure the robustness of information requirements, their coverage by the IT infrastructure, and the ownership of information. Specifically, the methodology includes various steps, namely the selection, customization, refinement and validation of the ABEs, evaluation of the informative support, and sensitivity analysis. The model can be used for analysis, audit and strategic planning, and may be linked to CASE tools. Title: AUTOMATING TEST CASE GENERATION FOR REQUIREMENTS SPECIFICATION FOR PROCESSES ORCHESTRATING WEB SERVICES Author(s): Krzysztof Sapiecha and Damian Grela Abstract: This research concerns the generation of test cases for processes defined in BPEL. In the paper it is shown that under some assumptions a BPEL process may be considered as an embedded system in which tasks are like services and communication between tasks is like coordination of the services according to the task graph of the system. Following this analogy, a new method for automating test case generation for requirements specifications of BPEL processes is given. An example illustrating the method is presented. Title: APPLYING MDA TO GAME SOFTWARE DEVELOPMENT Author(s): Takashi Inoue and Yoshiyuki Shinkawa Abstract: Game software becomes more and more complex as game platforms improve and the requirements of game users become more sophisticated, and so does the development of game software. However, there are few established methodologies for game software development, which decreases the productivity of development. One solution to this problem is to apply modeling technologies, as we do in other application areas; through modeling we can anticipate productivity improvements. This paper evaluates the applicability and suitability of MDA (Model Driven Architecture) to game software development, along with establishing a UML modeling process for typical game categories.
Title: EXTRACTING CLASS STRUCTURE BASED ON FISHBONE DIAGRAMS Author(s): Makoto Shigemitsu and Yoshiyuki Shinkawa Abstract: Current software development methodologies usually assume the existence of definite rules and processes in target problem domains. However, in software development for non-routine applications, this assumption might decrease productivity and make it difficult to identify the optimal solutions. The paper proposes a development method for such software using fishbone diagrams to analyze the requirements of stakeholders, which can finally derive UML diagrams from the cause-result structure defined by the fishbone diagrams. The method could improve the productivity of such development, creating high-quality software specifications. We also show a case study on developing education assistance software using the proposed method. Title: IMPROVING REQUIREMENTS ANALYSIS - RIGOROUS PROBLEM ANALYSIS AND FORMULATION WITH COLOURED COGNITIVE MAPS Author(s): John R. Venable Abstract: This position paper argues that methodical and rigorous attention to problem formulation should be an essential part of requirements analysis and proposes a method for modelling problems and potential solutions. Currently, most IS development methodologies ignore the issue of problem formulation. In practice, most IS development projects ignore or pay little attention to this issue. In this paper we argue that the resulting lack of proper problem analysis and formulation is a major cause of IS failures. To address this lack of attention, this paper proposes and reports on research in progress on the development and evaluation of a new technique, Coloured Cognitive Maps (CCM), for use in problem formulation in the early stages of Information Systems Development. The notation of coloured cognitive maps and a procedure to use them for problem analysis and solution derivation are briefly described.
Initial anecdotal results from employing CCM with students and practitioners are briefly described. Area 4 - Software Agents and Internet Computing Title: AN ADAPTIVE AND DEPENDABLE SYSTEM CONSIDERING INTERDEPENDENCE BETWEEN AGENTS Author(s): Keinosuke Matsumoto, Tomoaki Maruo, Akifumi Tanimoto and Naoki Abstract: A multiagent system (MAS) has recently gained public attention as a method to solve competition and cooperation in distributed systems. However, the MAS's vulnerability to the propagation of failures prevents it from being applied to large-scale systems. This paper proposes an approach to improve the efficiency and reliability of distributed systems. The approach monitors messages between agents to detect undesirable behaviors such as failures. Collecting this information, the system generates global information about the interdependence between agents and expresses it in a graph. This interdependence graph enables us to detect or predict undesirable behaviors. This paper also shows that the system can optimize the performance of a MAS and adaptively improve its reliability in complicated and dynamic environments by applying the global information acquired from the interdependence graph to a replication system. The advantages of the proposed method are illustrated through a simulation experiment based on a virtual auction market. Title: INTEGRATING A STANDARD COMMUNICATION PROTOCOL INTO AN E-COMMERCE ENVIRONMENT BASED ON INTELLIGENT AGENTS Author(s): D. Vallejo, J. J. Castro-Schez, J. Albusac-Jimenez and C. Glez-Morcillo Abstract: Communication among intelligent agents which take part in negotiation protocols is not frequently considered an important topic in the negotiation stage. These mechanisms are often designed for specific problems, without taking into account the interoperability of the agents involved.
In this paper, we suggest a standard communication protocol to ensure the interoperability of intelligent agents based on fuzzy logic in an e-commerce environment. Moreover, this environment is integrated into a FIPA-compliant multi-agent system, formalizing the communication among agents at the message and payload levels. The standard messages and the protocols used are described in depth, together with a detailed example. Title: CONSUMER-TO-CONSUMER TRUST IN E-COMMERCE - ARE THERE RULES FOR WRITING HELPFUL PRODUCT REVIEWS Author(s): Georg Peters, Matthias Damm and Richard Weber Abstract: Since the emergence of the Internet, online shopping has grown rapidly and replaced parts of traditional face-to-face shopping in real shops in cities and shopping centres. The role of the sales assistant has been supplemented or even taken over by online information like buyers' guides, product reviews or product-related discussion groups. For example, Amazon offers its customers the possibility to write product reviews which are published on the product page. However, a potential buyer is confronted with a problem similar to that in physical shops: can I trust the recommendation of the sales assistant in a physical shop, and, correspondingly, can I trust the recommendations (the product reviews of former buyers) published by the Internet shop? To this end, at Amazon, readers of product reviews can classify a review as helpful or not. In our paper we analyse whether there are relationships between the formal structure of a product review and the degree to which readers classify a review as helpful. We present the results of a case study on Germany's Amazon shop and derive "writing rules" for good product reviews from our analysis. Title: USE OF SEMANTIC TECHNOLOGY TO DESCRIBE AND REASON ABOUT COMMUNICATION PROTOCOLS Author(s): Miren I.
Bagüés, Idoia Berges, Jesús Bermúdez, Alfredo Goñi and Arantza Illarramendi Abstract: Nowadays there is a tendency to enhance the functionality of Information Systems with appropriate information agents. Those information agents communicate through communication acts expressed in an Agent Communication Language. Moreover, the aim is to achieve interoperation of those agents through standard communication protocols in a distributed environment such as that supported by the Semantic Web. In this paper we present a proposal to describe those protocols using a Semantic Web language. The proposal has two main features. On the one hand, the communication acts that appear in the communication protocols are described by terms belonging to a communication acts ontology called COMMONT. On the other hand, protocols are represented by state transition systems described using the OWL-DL language. This type of description provides the means to reason about the communication protocols in such a way that several kinds of structural relationships can be detected, namely whether a protocol is a prefix, a suffix or an infix of another protocol, with that relationship taken in the sense of equivalence or specialization. Furthermore, equivalence and specialization relationships can also be detected for complete protocols. Those relationships are captured by subsumption of classes described with a Semantic Web language. Title: FRAMEWORK FOR BUILDING OF A DYNAMIC USER COMMUNITY (EPH) - SHARING OF CONTEXT-AWARE, PUBLIC INTEREST INFORMATION OR KNOWLEDGE THROUGH ALWAYS-ON SERVICES Author(s): Monica Vlădoiu and Zoran Constantinescu Abstract: ePH aims to be a framework around a user-centred digital library core that stores regional information and knowledge and that boosts a self-developing community of transnational users around it. The digital library's content will be accessible through always-on context-aware services (location-based services included).
The users can get it or enhance it, according to their location: at home or in the office by using a computer, on the road with a specific GPS-based device in the car, or off-line, off-road via mobile phone. The digital library will contain: public interest information (drugstores, hospitals, general stores, gas stations, entertainment places, restaurants, travel/accommodation, weather, routes etc.), historical/touristic/cultural information and knowledge, users' personal "war stories" (tracks, touristic tours, impressions, photos, short videos and so on), and users’ additions, comments or updates to the content. This content will come alive to ePH users based on their contextual interest (e.g. geo-location). Our plan is to develop (as open source) the ePH system for our county of origin and to provide an easy-to-use how-to recipe to clone it for other regions. Title: AGENT-ORIENTED COMPUTING FOR BUSINESS PROCESS MANAGEMENT – WHAT, WHY AND HOW Author(s): Minhong Wang, Kuldeep Kumar and Weijia Ran Abstract: Increasing attention is being paid to business process management for improving organizational efficiency and effectiveness. In addition to workflow technology, Agent-Oriented Computing (AOC) is being explored in the search for significant advances in process management approaches for complex situations. However, recent research on this problem is experience-driven, ad-hoc, and without a cohesive theoretical base. The objective of this work is to examine the key problem of complex process management and the mechanisms of Agent-Oriented Computing, with a view to providing guidelines and implications for designing and managing complex business processes. Title: ONLINE SHOPPING AGENTS: AN EXPLORATORY STUDY OF USERS’ PERCEPTIONS OF SERVICE QUALITY Author(s): Călin Gurău Abstract: The use of online shopping agents has increased dramatically in the last 10 years, as a result of e-commerce development.
Despite the importance of these online applications, very few studies have attempted to identify and analyse the main factors that influence users’ perceptions of the service quality of online shopping agents and, consequently, the elements that determine users’ choice of online shopping agents. The present study attempts to fill this literature gap, identifying, on the basis of primary data analysis, the various circumstantial or personal factors that can determine the choice of a specific search strategy and shopping agent. Title: USING PREPAID CARDS IN E-BUSINESS - LIBREAIRE, AN E-BOOKS ONLINE STORE Author(s): Omar Nouali and Abdelghani Krinah Abstract: In this paper, we present an electronic business application and its related architecture. The developed system is a sample e-book sales website that involves shopping in a virtual store. The framework has been conceived as a prototype system which illustrates how Java technology, especially Servlets, can assist developers in building and deploying e-commerce applications. The problem of e-payment is also discussed, along with the selection of the most suitable solution for the proposed architecture. The adopted model allows business process logic to be handled on the server side in a simple and secure way. Title: SOPPA - SERVICE ORIENTED P2P FRAMEWORK FOR DIGITAL LIBRARIES Author(s): Marco Fernandes, Pedro Almeida, Joaquim A. Martins and Joaquim S. Pinto Abstract: The P2P and SOA paradigms provide new opportunities for the development of new digital libraries and the redesign of existing ones. This paper describes the work being conducted to create a framework for building digital libraries, which relies on a P2P network and service-enabled peers to achieve high modularity, reusability and performance in dynamic environments. While this framework supports data and metadata storage and management, this paper focuses on its service-oriented approach.
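The "service-enabled peer" idea behind SOPPA can be illustrated with a minimal sketch: each peer exposes named services and routes unknown requests to its neighbours. The class and method names below are illustrative assumptions; the abstract gives no code, and the real framework's API may differ entirely.

```python
# Minimal sketch of a service-enabled peer in the spirit of SOPPA.
# All identifiers here are hypothetical, not the framework's actual API.

class ServicePeer:
    """A peer that hosts named services and forwards unknown requests."""

    def __init__(self, peer_id, neighbours=None):
        self.peer_id = peer_id
        self.neighbours = neighbours or []   # other ServicePeer instances
        self.services = {}                   # service name -> callable

    def register(self, name, handler):
        self.services[name] = handler

    def invoke(self, name, *args, visited=None):
        """Invoke a service locally, or route the call through the P2P overlay."""
        visited = visited or set()
        visited.add(self.peer_id)
        if name in self.services:
            return self.services[name](*args)
        for peer in self.neighbours:          # flood to unvisited neighbours
            if peer.peer_id not in visited:
                result = peer.invoke(name, *args, visited=visited)
                if result is not None:
                    return result
        return None


# Usage: a metadata-search service hosted on one peer, reached from another.
storage = ServicePeer("storage")
storage.register("search", lambda q: [t for t in ["soppa paper", "p2p survey"] if q in t])
front = ServicePeer("front", neighbours=[storage])
print(front.invoke("search", "p2p"))
```

Because services are addressed by name rather than by host, a peer can be added or replaced without changing its callers, which is the modularity the abstract emphasizes.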
Title: IMPROVING PERFORMANCE OF BACKGROUND JOBS IN DIGITAL LIBRARIES USING GRID COMPUTING Author(s): Marco Fernandes, Pedro Almeida, Joaquim A. Martins and Joaquim S. Pinto Abstract: Digital libraries are responsible for maintaining very large amounts of data, which must typically be processed for storage and dissemination purposes. Since these background tasks are usually very demanding in terms of CPU and memory usage, especially when dealing with multimedia files, centralized architectures may suffer from performance issues, affecting responsiveness to users. In this paper, we propose a grid service layer for the execution of background jobs in digital libraries. A case study is also presented, in which we evaluate the performance of a grid application. Title: AN INTER-ORGANIZATIONAL PEER-TO-PEER WORKFLOW MANAGEMENT SYSTEM - P2P BASED VIRTUAL ORGANIZATION CONCEPT Author(s): A. Aldeeb, K. Crockett and M. J. Stanton Abstract: Current inter-organizational cooperation technologies and approaches do not adequately support cross-organizational workflow. These approaches concentrate on automating the public workflow in isolation from the internal workflow management systems inside the cooperating organizations. Integrating peer-to-peer (P2P) and workflow technology enables virtual enterprises to dynamically form and dismantle partnerships between organizations' workflow management systems. In addition, P2P-based workflow systems support various forms of workflow interoperability, e.g. capacity sharing, chained execution, subcontracting, case transfer, loosely coupled execution and the public-to-private approach. This paper describes a novel peer-to-peer inter-organizational workflow management framework (P2P inter-org WFMS), which includes workflow advertisement, workflow interconnection, and workflow cooperation. Each organization acts as a workflow peer (WFP) in a virtual enterprise.
The proposed system utilizes advanced P2P networking features: collaboration capabilities and resource sharing. Sun Microsystems’ JXTA P2P networking environment is used for the prototype implementation. XPDL (XML Process Definition Language) is used for process definition as it offers portability between different process design tools. The internal WFMS in each organization is being implemented using TIBCO Business Studio™. Title: THE DECENTRALIZED DATACENTER IN THE AGE OF SOA Author(s): Jackson He, Enrique Castro-Leon and Simón Viñals Larruga Abstract: The adoption of SOA in business computing environments is growing due to the promise of significant cost reduction in the planning, deployment and operation of IT projects. However, the organic transformation from legacy enterprise applications to SOA applications has so far been seen mostly in large enterprise datacenters where services are centralized. This paper discusses the effect of SOA, and specifically of the Outside-In SOA defined previously, on the increasingly decentralized way in which datacenters are deployed and managed. Patterns of adoption of SOA, in combination with emerging technologies, lead us to believe that the traditional datacenter owned by a large organization, i.e., the traditional monolithic datacenter, will evolve into a more federated form, with horizontal specialization creating opportunities for smaller players and emerging economies. The decentralization of the physical datacenter will take place through federated services offered by distributed service providers. This new dynamic will affect large and small businesses alike. Title: USING BIT-EFFICIENT XML TO OPTIMIZE DATA TRANSFER OF XFORMS-BASED MOBILE SERVICES Author(s): Jaakko Kangasharju and Oskari Koskimies Abstract: We consider here the case of XForms applications on small mobile devices.
The aim is to find out whether a schema-aware binary XML format is better suited to this area than generic compression applied to regular XML. We begin by limiting the potential areas of improvement through considering the features of the binary format, and then proceed to measure effects in the identified areas to determine whether a binary format would be effective. Title: A NOVEL APPROACH TO MODEL AND EVALUATE DYNAMIC AGILITY IN SUPPLY CHAINS Author(s): Vipul Jain and Lyès Benyoucef Abstract: In this paper, we propose a novel approach to model agility and introduce a Dynamic Agility Index (DALi) through fuzzy intelligent agents. Generally, it is difficult to emulate human decision making if the recommendations of the agents are provided as crisp, numerical values. The multiple intelligent agents used in this study communicate their recommendations as fuzzy numbers to accommodate ambiguity in the opinions and the data used for modeling agility attributes for integrated supply chains. Moreover, when agents operate based on different criteria pertaining to agility, such as flexibility, profitability, quality, innovativeness, pro-activity, speed of response, cost, robustness, etc., for integrated supply chains, the ranking and aggregation of these fuzzy opinions to arrive at a consensus is complex. The proposed fuzzy intelligent agents approach provides a unique attempt to determine consensus among these fuzzy opinions and effectively model dynamic agility. The efficacy of the proposed approach is demonstrated with the help of an illustrative example. Title: AN OVERVIEW OF PLC NETWORK PERFORMANCE IN PRESENCE OF HOUSEHOLDS INTERFERENCE Author(s): Haroldo Zattar, Jakson Bonaldo and Gilberto Carrijo Abstract: During this decade, telecommunication companies have been struggling to provide Internet access to rural areas, poor neighborhoods and remote areas. In Brazil, less than 10% of houses have broadband, and ADSL is the dominant system.
An alternative solution for the realization of access networks is offered by Power Line Communication (PLC) systems, which nowadays present many advantages. Applications of PLC in low-voltage supply networks therefore seem to be a cost-effective solution for so-called “last mile” communication networks belonging to the access area. Problems such as noise levels, cable attenuation at PLC channel frequencies and electromagnetic compatibility must be studied in depth to improve this technology. This paper therefore presents an overview of PLC. In addition, the paper shows the performance of PLC using an experimental LAN in the presence of different types of noise caused by household devices. The paper also presents experimental results used to find the best TCP window size for a PLC network. The performance investigation was carried out in the Telecommunications Laboratory of the Federal University of Mato Grosso. Title: WEB 2.0 MASHUPS FOR CONTEXTUALIZATION, FLEXIBILITY, PRAGMATISM, AND ROBUSTNESS Author(s): Thorsten Hampel, Tomáš Pitner and Jonas Schulte Abstract: In current Web 2.0 developments the word "mashups" stands for a new kind of flexible application that brings together several Web 2.0 services. Mashups form a serious new trend in web development by contextualizing existing services and bringing customizable applications to the user. Out of this, a new style of web-based applications arises. To get a better understanding of the mashup phenomenon, this paper observes and interprets how the development of web-based applications and their supporting back-end systems has changed. It presents several state-of-the-art Web 2.0 technologies and points out why the evolution of Web 2.0 applications currently moves from web-based APIs to web services. We present a short taxonomy of mashups as flexible, pragmatic, and robust web applications. Further, their new form of flexibility and customization is illustrated with the help of our mistel Framework.
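The core mashup idea discussed above, contextualizing data from one service with data from another, can be sketched in a few lines. The two feeds below are local stand-ins for real Web 2.0 APIs (which a real mashup would fetch over HTTP); their fields and values are illustrative assumptions, not any actual service's schema.

```python
# Toy mashup: join a "photo-sharing" feed with a "weather" feed so that
# each photo is contextualized with the weather at its location.
# Both feeds are local stand-ins for remote Web 2.0 services.

photos = [
    {"title": "Old town", "place": "Brno"},
    {"title": "Harbour", "place": "Porto"},
]
weather = {"Brno": "cloudy", "Porto": "sunny"}

def mashup(photos, weather):
    """Return each photo annotated with the current weather at its place."""
    return [
        {**p, "weather": weather.get(p["place"], "unknown")}
        for p in photos
    ]

for item in mashup(photos, weather):
    print(item["title"], "-", item["weather"])
```

The point of the exercise is that neither underlying service changes: the mashup adds value purely by combining their outputs in a new context, which is why the paper treats mashability as a property of the services being combined.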
Title: A WEIGHTED APPROACH FOR OPTIMISED REASONING FOR PERVASIVE SERVICE DISCOVERY USING SEMANTICS AND CONTEXT Author(s): Luke Steller, Shonali Krishnaswamy, Simon Cuce, Jan Newmarch and Seng Loke Abstract: There is an increased imperative for the development of pervasive service discovery architectures to aid mobile users in time-critical tasks in service-rich environments. To achieve this, discovery architectures must be context-aware, and service semantics must be captured to match services with requests based on meaning, not syntax. In order to cater for ad-hoc environments and user privacy, semantic reasoning must be done on pervasive devices, which are resource-constrained. However, current semantic reasoners are resource-intensive and designed for desktop environments. There is a need for a reasoning approach which adaptively manages the trade-off between computation time and result precision so that semantic approaches can be scaled to small devices. As a first step toward this goal, we have developed several optimisation and branch-ranking algorithms for pervasive reasoning. We then provide a performance evaluation of our approach. Title: EFFICIENT NEIGHBOURHOOD ESTIMATION FOR RECOMMENDATION MAKING Author(s): Li-Tung Weng, Yue Xu, Yuefeng Li and Richi Nayak Abstract: Recommender systems produce personalized product recommendations during a live customer interaction, and they have achieved widespread success in e-commerce. For many recommender systems, especially collaborative-filtering-based ones, neighbourhood formation is an essential algorithmic component, because in order to make a recommendation, a collaborative-filtering-based recommender must form a set of users sharing interests similar to those of the target user.
“Best-k-neighbours” is a popular neighbourhood formation technique commonly used by recommender systems; however, with the tremendous growth in customers and products in recent years, computational efficiency has become one of the key challenges for recommender systems. Forming a neighbourhood by going through all neighbours in the dataset is not desirable for large datasets containing millions of items and users. In this paper, we present a novel neighbourhood estimation method which is both memory- and computation-efficient. Moreover, the proposed technique also alleviates the common “fixed-n-neighbours” problem of standard “best-k-neighbours” techniques, and therefore allows better recommendation quality for recommenders. We combined the proposed technique with a taxonomy-driven product recommender, and in our experiments both the time efficiency and the recommendation quality of the recommender are improved. Title: WEB INFORMATION RECOMMENDATION MAKING BASED ON ITEM TAXONOMY Author(s): Li-Tung Weng, Yue Xu, Yuefeng Li and Richi Nayak Abstract: Recommender systems have been widely applied in the domain of e-commerce and have caught much research attention in recent years. They make recommendations to users by exploiting past users’ item preferences, thus eliminating the need for users to form their queries explicitly. However, recommender systems’ performance can be easily affected when insufficient item preference data has been provided by previous users. This problem is commonly referred to as the cold-start problem. This paper suggests another information source, item taxonomies, in addition to item preferences, for assisting recommendation making. Item taxonomy information has been widely applied in diverse e-commerce domains for product or content classification, and can therefore be easily obtained and adapted by recommender systems.
In this paper, we investigate the implicit relations between users’ item preferences and taxonomic preferences, and suggest and verify, using information gain, that users who share similar item preferences may also share similar taxonomic preferences. Under this assumption, a novel recommendation technique is proposed that combines users’ item preferences and the additional taxonomic preferences to make better-quality recommendations as well as alleviate the cold-start problem. Empirical evaluations of this approach are conducted, and the results show that the proposed technique outperforms other existing techniques in both recommendation quality and computational efficiency. Title: PARTNER SELECTION FOR VIRTUAL ORGANIZATION - SUPPORTING THE MODERATOR IN BUSINESS NETWORKS Author(s): Heiko Thimm, Kathrin Thimm and Karsten Boye Rasmussen Abstract: More and more companies improve their sustainability by belonging to company networks. When a specific inquiry is directed towards the company network, a selection of companies is required that partner up, in a mixture of complementary or even overlapping companies, to form a virtual organization. The selection of partners is a challenging task for the network moderator, and poor results - producing less value for the participating selected companies - as well as unintelligible results - that confuse the companies - can jeopardize the network. We therefore outline a web-based solution in which information technology supports the moderator by not only providing an optimized selection of companies but also making the management process transparent and traceable, thus building firmer ground for the network's future. Title: LEARNING TECHNOLOGY SYSTEM ARCHITECTURE BASED ON AGENTS AND SEMANTIC WEB Author(s): Alejandro Canales and Rubén Peredo Abstract: The paper presents a new Learning Technology System Architecture that is implemented using agents and the Semantic Web.
The architecture is divided into client and server parts to facilitate adaptivity in various configurations such as online, offline and mobile scenarios. The implementation of this architectural approach is discussed for SiDeC (an authoring tool) and the Evaluation System, which together form an integrated system for Web-Based Education with powerful adaptivity for the management, authoring, delivery and monitoring of learning material. Title: AUTOMATIC GENERATION OF SEMANTIC ANNOTATIONS FOR SUPPORTING TECHNOLOGY MONITORING ON THE WEB Author(s): Tuan-Dung Cao, Rose Dieng-Kuntz, Marc Bourdeau and Bruno Fiès Abstract: The advent of Semantic Web technologies promises intelligent access to Web information through the use of semantic annotations based on relevant ontologies. Automatic generation of semantic annotations of Web documents can be useful to support technology monitoring on the Web. In this article, after describing our approach based on Semantic Web technologies for building the OntoWatch technology monitoring system, we present the main components on which it relies: an ontology dedicated to technology watch, and an ontology-based algorithm for automatic search and annotation of Web documents. Title: IMPROVING THE LOCATION OF MOBILE AGENTS ON SELF-AWARE NETWORKS Author(s): Ricardo Lent Abstract: Business applications on the Internet are evolving from centralised to distributed architectures, demanding higher-quality group communication facilities. Such communications are often time-sensitive and prone to disruptions or unacceptable latency because of the high variability of Internet traffic and capacity across the network. Self-aware networks offer mesh networking facilities with adaptive routing able to cope with changes in network conditions, such as variations in traffic load and link or node failure.
Further quality improvement can be achieved by delegating critical application tasks to software agents able to migrate to alternate hosts and take advantage of new location facilities for their communications. The paper examines the concurrent use of self-aware networking and software agent mobility to offer improved communication facilities to time-critical group communications. A host selection algorithm for agent migration is proposed to find suitable locations for agents, and is evaluated in a simulation study. Title: WHAT CAN CONTEXT DO FOR TRUST IN MANETS? Author(s): Eymen Ben Bnina, Olivier Camp, Chùng Tiến Nguyễn and Hella Kaffel Ben Ayed Abstract: The global performance of a mobile ad hoc network (MANET) greatly depends on both the cooperation of participating nodes and the environment in which the nodes evolve. The willingness of a node to cooperate can be illustrated by the trust assigned to the node. Yet, existing trust models, designed for reliable wired networks, do not take into consideration possible communication failures between client and server. We believe that, in the case of ad hoc networks, such factors should be considered when computing trust. In this article, we show how an interaction can be decomposed into three separate phases: two communication phases for transporting the request to the server and the response back to the client, and one execution phase which represents the actual execution of the service by the server. We propose to define the communication environment using contextual attributes and to consider this context when assigning trust to a server. We discuss the possible uses of context in the field of trust computation in MANETs and define contextual attributes that seem important to consider when modelling and computing trust.
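The three-phase decomposition in the MANET trust abstract lends itself to a small sketch: failures during the two communication phases can be attributed to the environment (poor link context) rather than to the server, so only the execution phase should count as evidence about the server. The update rule, the `link_quality` threshold, and the smoothing factor below are our own illustrative assumptions, not the authors' model.

```python
# Sketch of context-aware trust: an interaction has three phases
# (request transport, service execution, response transport). A failure in a
# communication phase under a bad link context yields no evidence about the
# server, so trust is left unchanged. All constants are illustrative.

def update_trust(trust, phase_outcomes, link_quality, alpha=0.3):
    """phase_outcomes: (request_ok, execution_ok, response_ok);
    link_quality in [0, 1] summarizes the communication context."""
    request_ok, execution_ok, response_ok = phase_outcomes
    if not (request_ok and response_ok) and link_quality < 0.5:
        # Communication failed in a poor context: blame the environment.
        return trust
    observation = 1.0 if (request_ok and execution_ok and response_ok) else 0.0
    return (1 - alpha) * trust + alpha * observation  # exponential smoothing

t = 0.8
t = update_trust(t, (True, False, True), link_quality=0.9)    # server misbehaved
t = update_trust(t, (False, False, False), link_quality=0.2)  # bad link: ignored
print(round(t, 2))
```

A context-blind model would have punished the server for the second interaction as well, which is exactly the bias the abstract argues against.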
Title: DEVELOP ADAPTIVE WORKPLACE E-LEARNING ENVIRONMENTS BY USING PERFORMANCE MEASUREMENT SYSTEMS Author(s): Weijia Ran and Minhong Wang Abstract: Workplace learning is a contextual and dynamic process embedded in the working environment. Needs and desires in workplace learning arise from actions and practices in the working environment; learning content consists of explicit and tacit knowledge dynamically created and intertwined with working and practicing. The development of an effective workplace e-learning system faces several problems: 1) How to specify and update learning needs and desires in contextual and dynamic workplace settings? 2) How to activate and formalize the knowledge sharing and contribution procedure for collecting knowledge that emerges during practice in working communities? 3) How to organize and store knowledge pieces in a way that reflects workplace learning needs and supports adaptive learning content delivery? 4) How to continuously update and adjust learning content to keep up with the changing working context? In order to solve these problems, we propose an adaptive workplace learning model, in which performance measurement is used as an indication of working proficiency, a reflection of learning needs, and a sign of the level and quality of knowledge shared and contributed for achieving specific performance, with a view to organizing learning content and effectively guiding the learning and knowledge-sharing process. Title: A MULTIAGENT SYSTEM FOR JOB-SHOP SCHEDULING Author(s): Claudio Cubillos, Leonardo Espinoza and Nibaldo Rodriguez Abstract: The present work details our experience in the design and implementation of a multiagent architecture devoted to a dynamic job-shop setting using the PASSI methodology. The system has been modeled with the PASSI Toolkit (PTK) and implemented over the Jade agent platform.
The agent system is in charge of the planning and scheduling of jobs and their corresponding operations on a set of available machines, while considering the materials assigned to each operation. Dynamicity concerns scheduling job orders on the fly and re-scheduling caused by changes to the original plan due to clients, machines or material stocks. Title: CONTEXTUAL SEMANTIC SEARCH - CAPTURING AND USING THE USER’S CONTEXT TO DIRECT SEMANTIC SEARCH Author(s): Caio Stein D’Agostini, Renato Fileto, Mário Antônio Ribeiro Dantas and Fernando Alvaro Ostuni Gauthier Abstract: One orthodox perspective of the Semantic Web depends on establishing a consensual users’ view for a universe of discourse. However, this expectation is unrealistic, because different people have different views of the world, making it impossible in certain cases to devise a unified knowledge view that satisfies everyone. One solution for this problem is to allow different users’ views of some knowledge assembled in an ontology, and to keep track of the mappings between these views and the underlying ontology. This work proposes to collect context information from users’ interactions with a semantic search system, in order to gradually build individual users’ views mapped to an ontology. This approach allows the user to pose queries based on keywords or on his personal knowledge view. In addition, each personalized knowledge view captures the preferences of a user, enabling the system to provide better search results based on its previous experience with that user. Title: GET READY FOR MASHABILITY! CONCEPTS FOR WEB 2.0 SERVICE INTEGRATION Author(s): Pavel Drášil, Tomáš Pitner, Thorsten Hampel and Marc Steinbring Abstract: Mashups represent - besides the ease-of-use, high-interactivity and social networking factors - another significant phenomenon of Web 2.0.
However, this "mashing process" is mostly based upon ad-hoc approaches and techniques, rather than upon an in-depth analysis of the "mashing potential" of the services. Therefore, the first goal of this paper is to provide a conceptual foundation for mashups - to identify services typically integrated into mashups, and to propose classification criteria forming a "Mash-Tree", which were subsequently applied to selected services representative of each classification category. Secondly, the mashing potential is studied from three perspectives - technical aspects of mashability, business models for mashups, and potential legal issues concerning the services themselves as well as the user-created data in them. We hope that the proposed mashup conceptualization together with our analysis of mashability can help to develop future Web 2.0 mashups that not only better meet stakeholders' expectations but also respect legal terms and are technologically sound. Title: FLIGHT SIMULATION ENVIRONMENTS APPLIED TO AGENT-BASED AUTONOMOUS UAVS Author(s): Ricardo Gimenes, Daniel Castro Silva, Luís Paulo Reis and Eugénio Oliveira Abstract: Developed countries have made significant efforts to integrate Unmanned Aerial Vehicle (UAV) operations into controlled airspace due to a rising interest in using UAVs for civilian as well as military purposes. This paper focuses on looking for reliable solutions and a way to validate an autonomous multiagent control system through a variety of flight simulations. This study has two main lines, the first being the use of multiagent systems in UAVs, aiming at fully autonomous control. The second and focal line is a survey of the variety of simulation systems dedicated to aerodynamics and aircraft, comparing them and their suitability for validating the developed multiagent control system. A comparative study of existing simulation environments is thus presented, covering both commercial simulation game engines and research simulators.
One critical factor is hazard situations, such as an emergency landing without a runway or an equipment failure, which should be predicted in an automated system. Most flight simulators are not realistic enough to validate a multiagent algorithm in hazard situations. At the same time, it is impossible to predict every type of failure in the real world. The boundaries of the simulation should therefore be very well delimited in order to present results based on it. Title: MAKING USE OF MOBILE DEVICES IN E-COMMERCE - OVERCOMING ORGANIZATIONAL BARRIERS THROUGH USER PARTICIPATION Author(s): Jan vom Brocke, Bettina Thurnher and Dietmar Winkler Abstract: Mobile devices offer great potential for the design of business processes. However, realizing this potential in practice is still problematic. While the technologies are nowadays widely available, the problems still lie in the management of organizational change. In this paper, we analyze the contribution of user participation to the successful implementation of mobile business processes. We present the results of five case studies conducted in the IT service sector. The work gives empirical evidence that user participation (a) reduces adoption and transition barriers and (b) improves business metrics. Title: EVALUATION OF SEARCH ENGINE OPTIMIZATION - EXPERIMENTS FOR A BUSINESS SITE AND A NEW EVALUATION MEASURE Author(s): Julia Maria Schulz, Ralph Kölle, Christa Womser-Hacker and Thomas Mandl Abstract: This paper reports experiments on search engine optimization for a business site. Several search terms have been optimized for three web search engines. From the business site, 300 pages were selected for optimization. In three phases, several on- and off-page modifications were carried out and the results were monitored. The results show that search engines do react to modifications and that the target pages are ranked higher on average.
The variance of the improvements is extremely large, which means that there is no guarantee that SEO activities will work for any single page. We suggest a new evaluation measure which takes typical search engine user behavior into account. Title: ENHANCING ENTERPRISE COLLABORATION USING CONTEXT-AWARE SERVICE BASED ON ASPECTS Author(s): Khouloud Boukadi, Chirine Ghedira and Lucien Vincent Abstract: In fast-changing markets, dynamic collaboration involves establishing and "enacting" business relationships in an adaptive way, taking context changes into account. This relies on using adaptable and flexible IT platforms. Service orientation can address this challenge. Accordingly, collaborative processes can be implemented as a composition of a set of services. However, directly combining elementary IT services is a hard task and presents risks on both the service provider and user sides. In this paper we present a high-level structure called the Service Domain, which orchestrates a set of related IT services based on BPEL specifications. To ensure the Service Domain's adaptability to context changes, our approach aims to demonstrate the benefits of Aspect-Oriented Programming. Title: TRUSTED INFORMATION PROCESSES IN B2B NETWORKS Author(s): Chintan Doshi and Liam Peyton Abstract: The design, implementation and management of inter-organizational business processes that operate across the Internet have to address a number of issues that do not normally arise for business processes that operate solely within an organization. A framework is needed which supports traditional business process management and which also has the technical infrastructure in place to address federated identity management, privacy compliance and performance management.
In this paper, we examine how this can be accomplished in an architecture with built-in event logging and privacy auditing that deploys processes defined in the Business Process Execution Language (BPEL) standard into a "Circle of Trust" (CoT) architecture as specified by the Liberty Alliance standard for federated identity management. A sample business process scenario is implemented in the proposed framework and evaluated. Title: A SOA-BASED MULTI-AGENT APPLICATION FOR THE CROSSLINGUAL COMMUNICATION PROBLEM Author(s): Hércules Antonio do Prado, Aluizio Haendchen Filho, Míriam Sayão and Fénelon do Nascimento Neto Abstract: We present a multi-agent system (MAS) approach to deal with the crosslingual communication problem in communities with disparate levels of language. Service-oriented architecture (SOA) is adopted as the basis for designing our proposal, which has a double purpose: (a) to present an alternative for crosslingual communication and (b) to study an agent organization under the SOA specifications, making evident how the service-oriented approach can make MAS development much easier. A case is presented in which the communication between food consumers and experts in food quality and safety is carried out by means of agent organizations. We specify two basic sets of agents: application agents, responsible for the logical information flow inside the system, and software agents that are in charge of persistence, the interface, and the communication among the software artifacts. Title: MODULARIZATION OF WEB-BASED COLLABORATION SYSTEMS FOR MANUFACTURING INNOVATION Author(s): Kwangyeol Ryu, Seokwoo Lee and Honzong Choi Abstract: Unpredictable customer needs strongly require manufacturing enterprises to produce quality products satisfying cost and time constraints.
To cope with such a dynamically changing manufacturing environment and to gain higher competitiveness, the manufacturing industry needs to be equipped with advanced technologies, including IT, as well as substantial infrastructure. On the one hand, “i-Manufacturing” is the name of a project funded by the Korean government; on the other, it is the strategy for achieving manufacturing innovation in Korea. The most basic but important concept of i-Manufacturing is “collaboration”. As part of the i-Manufacturing project, we are developing various kinds of web-based collaboration systems, referred to as hub systems. As the number of collaboration systems and users increases every year, we have to modularize function modules for the easy and synthetic application of the systems to other conglomerates or industries. The collaboration systems we developed are currently being used by more than 300 companies in Korea. In this paper, therefore, we first introduce the i-Manufacturing project and the collaboration systems we have developed. The system architecture and the composition of function modules, which form a multi-level framework, are described in detail before concluding the paper. Title: INTEGRATION ARCHITECTURES BASED ON SEMANTIC WEB SERVICES: FAD OR MODEL FOR THE FUTURE? - FINDINGS OF A COMPREHENSIVE SWOT ANALYSIS Author(s): Daniel Bachlechner Abstract: Web services brought about a revolution by taking a remarkable step towards the seamless integration of distributed software components. The importance of Web services as a cornerstone of service-oriented integration architectures is recognized and widely accepted by experts from industry and academia. Current Web service technology, however, operates at the syntactic level and hence still requires human interaction to a large extent.
Semantic Web services promise the automation of core Web service tasks such as discovery, selection, composition and execution, thus enabling interoperation between systems while keeping human intervention to a minimum. Within the scope of this work, we discuss the capabilities of integration architectures based on Semantic Web services as well as relevant environmental factors. The discourse is based on the findings of a comprehensive SWOT analysis which was conducted in early 2007. To best assess the relevance and applicability of integration architectures based on Semantic Web services in an organisational context, particular importance was attached to the differences between the viewpoints of practitioners and researchers. Title: COLLABORATION ACROSS THE ENTERPRISE Author(s): Aggelos Liapis, Stijn Christiaens and Pieter De Leenheer Abstract: In the current competitive industrial context, enterprises must react swiftly to market changes. In order to face this problem, enterprises must increase their collaborative activities. This implies, on the one hand, a high degree of communication between their information systems and, on the other hand, the compatibility of their practices. A substantial amount of work must be performed towards proper practices of standardization and harmonization. This is the concept of interoperability. The interoperability of enterprises is a strategic issue, caused as well as enabled by the continuously growing ability to integrate new, legacy and evolving systems, in particular in the context of networked organisations. To implement useful interoperability, re-establishing the correct meaning (semantics) of communicated business information is essential and crucial to success. For this, the non-disruptive re-use of existing business data stored in “legacy” production information systems is an evident prerequisite.
In addition, the integration of a methodology as well as the scalability of any proposed semantic technological solution are equally evident prerequisites. Yet on all accounts, current semantic technologies as researched and developed for the so-called Semantic Web may be found lacking. Still, semantic technology is claimed to be about to become mainstream, as it is increasingly driven by enterprise interoperation needs and the growing availability of domain-specific content (for example, ontologies) rather than by basic technology (for example, OWL) providers. In this paper we present a methodology which has resulted in the implementation of a highly customizable collaborative environment focused on supporting the area of enterprise interoperability. The main benefit of this environment is its ability to integrate with legacy systems, sparing enterprises from having to adapt or upgrade their existing systems in order to interoperate with their partners. Title: A SOFTWARE AGENT FOR CONTENT BASED RECOMMENDATIONS FOR THE WWW Author(s): Gulden Uchyigit Abstract: The evolution of the WWW has led to an explosion of information and, consequently, a significant increase in usage. This avalanche effect has resulted in an uncertain environment in which we find it difficult to clarify what we want, or to find what we need. In this paper we introduce RecSys, which aims to confront this problem with a software agent that intelligently learns a user's interests and hence makes recommendations of resources on the WWW based on the user's profile. The system employs multiple TFIDF vectors to represent the various domains of a user's interests. It continuously and progressively learns the user's profile from both implicit and explicit feedback. This is achieved by the extraction and refinement of featured keywords within the Title: TRANSPARENCY IN CITIZEN-CENTRIC SERVICES - A TRACEABILITY-BASED APPROACH ON THE SEMANTIC WEB Author(s): Ivo J. Garcia dos Santos and Edmundo R.
Mauro Madeira Abstract: The search for effective strategies to increase transparency in public administration processes is becoming a key issue for governments and for the success of new citizen-centric services and applications. Also, the application of service-oriented architectures, semantics and ontologies is gaining momentum as an alternative to fulfil the inherent e-Government interoperability and dynamism demands. Considering the challenges introduced by this new scenario, this paper contributes by proposing an approach to monitor and audit composite e-Government services. The solution is based on a set of Traceability Policies modeled and implemented over a semantically-enriched and service-oriented middleware (CoGPlat). An example and the strategy's implementation issues are also discussed. Title: A DISTRIBUTED SOFTWARE ENVIRONMENT FOR COLLABORATIVE WEB COMPUTING Author(s): Antonio Pintus, Raffaella Sanna and Stefano Sanna Abstract: This paper describes the extensible core software element of a distributed, collaborative, peer-to-peer system, which provides several facilities for multi-channel Web computing, distributed information storage and retrieval, and Internet collaborative applications such as search engines and event delivery systems. Moreover, after an architectural introduction of the core distributed software module, the Core Node, this paper briefly describes a real application based on it, designed and implemented within the DART (Distributed Agent-based Retrieval Tools) project, including mobile device integration use cases. Title: APPLICATION OF KNOWLEDGE HUB AND RFID TECHNOLOGY IN AUDITING AND TRACKING OF PLASTERBOARD FOR ENVIRONMENT RECYCLING AND WASTE DISPOSAL Author(s): A. S. Atkins, L. Zhang, H. Yu and B.
P. Naylor Abstract: The traditional disposal of waste in landfill sites is causing serious environmental concerns due to the number of hectares consumed by this method and the waste of resources that results from not being proactive in effective recycling. The construction industry, for example, contributes approximately 108mt of the total 335mt of waste produced annually in the United Kingdom, of which 50-80% could be reusable or recyclable. Construction waste includes at least 1mt of plasterboard waste, which has caused serious problems because of the emission of hydrogen sulphide gas, which is odorous and poses a health hazard to people living near disposal sites. Consequently, this material can only be disposed of in licensed and specially designed sites. Plasterboard can be recycled, and some countries, for example Japan, Canada, the United States and, recently, the United Kingdom, are using this technology to obviate the issue of landfill. This paper outlines a knowledge hub using Radio Frequency Identification (RFID) linked to Computer Aided Design (CAD) systems to provide auditing and monitoring for the environmental recycling and/or licensed disposal of plasterboard waste. Title: A GENERAL SYSTEM FOR MONITORING AND CONTROLLING VIA INTERNET Author(s): Adrián Peñate Sánchez, Ignacio Solinis Camalich, Alexis Quesada Arencibia and José Carlos Rodríguez Rodríguez Abstract: The main aim of the project presented in this paper is to create a system that provides a solution to the variety of situations encountered in industrial plant control; to achieve this goal we have given the system automation, remote control and remote monitoring functionalities. Industrial installations can be composed of several plants; for this reason we offer a solution capable of defining a central office that is in control of all plants, each plant with its own controller.
Apart from the problem of geographically dispersed locations, it is also essential to offer automation for the tasks that take place in the plants. In order to achieve an automation system capable of handling as wide a range of situations as possible, we have designed an engine with which users may create their own rule-based system, defining the rules themselves by means of an interface that abstracts them from the formal definition of the rules. The system is built on Java Enterprise Edition 5, an industry-standard platform for multi-tier, distributed, fault-tolerant software development. Area 5 - Human-Computer Interaction Title: ASSESSING THE PROGRESS OF IMPLEMENTING WEB ACCESSIBILITY - AN IRISH CASE STUDY Author(s): Vivienne Trulock and Richard Hetherington Abstract: In this paper we attempt to gauge the implementation of web accessibility guidelines in a range of Irish websites by undertaking a follow-up study in 2005 to one conducted by McMullin three years earlier (McMullin, 2002). Automatic testing against version 1.0 of the Web Content Accessibility Guidelines (WCAG 1.0) using WebXact online revealed that accessibility levels had increased among the 152 sites sampled over the three-year period. Compliance levels of A, AA and AAA had risen from the 2002 levels of 6.3%, 0% and 0% respectively to 36.2%, 8.6% and 3.3% in 2005. However, manual checks on the same sites indicated that the actual compliance levels for 2005 were 1.3%, 0% and 0% for A, AA and AAA. Of the sites claiming accessibility, either by displaying a W3C or ‘Bobby’ compliance logo or in text on their accessibility statement page, 60% claimed a higher level than the automatic testing results indicated. When these sites were further checked manually, it was found that all of them claimed a higher level of accessibility compliance than was actually the case.
As most sites in the sample were not compliant with WCAG 1.0 for the entire set of disabilities, the concept of ‘partial accessibility’ was examined by identifying those websites that complied with the subsets of the guidelines particular to different disabilities. Some disability types fared worse than others. In particular, blindness, mobility impairment and cognitive impairment each had full support from at most 1% of the websites in the study. Other disabilities were better supported, including the partially sighted, the deaf and hearing impaired, and the colour blind, where compliance was found in 11%, 23% and 32% of the websites, respectively. Title: A DEVELOPMENT PROCESS FOR WEB GEOGRAPHIC INFORMATION SYSTEM - A CASE OF STUDY Author(s): María José Escalona, Arturo Torres-Zenteno, Javier Gutierrez, Eliane Martins, Ricardo da S. Torres and M. Cecilia C. Baranauskas Abstract: This paper introduces a process for developing Web GIS (Geographic Information Systems) applications. This process integrates the NDT (Navigational Development Techniques) approach with some of the Organizational Semiotics models. The proposal is applied to a real system. The result is WebMaps, a Web GIS system whose main goal is to support harvest planning in Brazil. Title: A CYBER ORGANIZATION IN THE CYBER WORLD - ICT AND E.TOTAL RELATIONSHIP MANAGEMENT (E.TRM) Author(s): Mosad Zineldin and Valiantsina Vasicheva Abstract: Purpose - This paper is part of a long-term research effort, the ultimate objective of which is to offer suggestions for integrating computer-based technology (CBT), including information and communication technology (ICT), with inter- and intra-organizational functions and relationships from a holistic e.TRM paradigm embedding the physical and cyber worlds.
Design/methodology/approach - This is a conceptual study based on recent developments in ICT and in theories of inter-firm relations, both economic (transaction cost economics) and socio-psychological (social exchange, inter-organization and industrial network), as well as relationship management and marketing theories and concepts with application in practice. Findings - A new concept and model of Cybernization is developed and discussed. Some general propositions are presented and some synergy effects of utilizing e.TRM are highlighted. The paper suggests how these approaches can add impetus to successful management, as a powerful competitive weapon connecting the physical world with cyberspace. Research Limitations/Implications - The area of cybernization of an organization is so vast that it is impossible to reach the desired level of detail regarding every aspect, even at a conceptual level. There is a need to operationalize the newly developed concepts. It is hoped, however, that the model and the ideas presented here will serve as a useful starting point for several related discussions and research. Originality/Value - This paper provides a model for Cybernity that accommodates the major manifestations of the cybernization of an organization and could potentially provide the means to link the diverse literature available in this area. It does so by proposing that it is important to recognize the direction and cornerstones of cybernity. It highlights the need to delineate the operational and tactical issues relating to cybernity from the strategic use of cyberspace. About the Author: Mosad Zineldin is Professor of Economics, Strategic Management and Marketing, and served as Chairman of the Marketing Department at the School of Management and Economics, Växjö University, Sweden. He taught at the School of Business, Stockholm University for many years. The author is also engaged in a considerable number of research and consulting activities.
He has participated in international conferences as a presenter and keynote speaker and has written several books and numerous articles. His latest book, TRM: Total Relationship Management (2000), is the first book in the world to outline the framework of relationship management from a holistic totality and multifunctional perspective. His articles have appeared in the European Journal of Marketing, International Journal of Bank Marketing for the financial service sector, Supply Chain Management, Journal of Marketing Intelligence & Planning, Management Decision Journal, International Journal of Physical Distribution & Logistics Management, Journal of Consumer Marketing, European Business Review, Managerial Auditing Journal, TQM Magazine and Journal of Health Care Quality Assurance. Some of his articles have been cited with the highest quality rating by ANBAR Electronic Intelligence and others positioned in the top-10 list by Emerald's readers and reviewers. Zineldin’s paper “The Royalty of Loyalty: CRM quality and retention” was selected as an Outstanding Paper and a Highly Commended Winner at the Emerald Literati Network Awards for Excellence 2007. Title: LEARNING OBJECT REENGINEERING BASED ON PRINCIPLES FOR USABLE USER INTERFACE DESIGN Author(s): Robertas Damasevicius and Lina Tankeleviciene Abstract: We analyze the problem of reengineering Learning Objects (LO) for web-based education. Such reengineering must be based on a sound methodological background and design principles. We apply methods adopted from the software engineering domain to redesign the structure and user interface of LOs, aiming at both the usability and accessibility of learning material. We evaluate the usability of LOs from the user interface point of view, following the user interface development principles common to both the Human-Computer Interaction (HCI) and e-Learning domains. We propose an LO reengineering framework based on user interface usability principles.
In a case study, we demonstrate how these principles and recommendations can be used to reengineer an LO to improve its learnability, understandability and usability in general. Title: NATURAL LANGUAGE INTERACTION BASED ON AUTOMATICALLY GENERATED CONCEPTUAL MODELS Author(s): Diana Pérez-Marín, Ismael Pascual-Nieto and Pilar Rodríguez Marín Abstract: In this paper, we present a new form of interaction between students and free-text scoring tools based on the use of automatically generated conceptual models. Traditionally, students have worked with free-text scoring tools by typing free-text answers to the open-ended questions shown on the system's interface. Students could not personalize the appearance of the interface or visually acknowledge the progress they had made after answering the questions. In contrast, with this new form of interaction they are able to input natural language text and look at their generated conceptual model, which can be defined as a network of concepts and the relationships among them. In the conceptual model, each node has a background colour that indicates how well it has been understood by the student. The conceptual model can be represented in several formats such as concept maps, tables, charts, diagrams or textual summaries. The results of two experiments carried out with a group of students and teachers show that they like this new form of interaction. Title: A COOPERATIVE METHOD FOR SYSTEM DEVELOPMENT AND MAINTENANCE USING WORKFLOW TECHNOLOGIES Author(s): J. L. Leiva, J. L. Caro, A. Guevara and M. A. Arenas Abstract: Reverse engineering has arisen as a fundamental alternative in all reengineering processes. Its objective is to recover design specifications and workflows (WF) in order to construct a representation of the system at a high degree of abstraction. This paper describes the basic aspects of the EXINUS tool, which enables the generation of process specifications and user interfaces in an organisation or business.
The main advantage is the possibility of modelling specifications of both the organisation’s current status and new methods generated in the system. We also propose a cooperative work system in which users participate in system development, using the advantages of the proposed tool. This methodology provides a high degree of reliability in the development of the new system, creating competitive advantages for the organisation by reducing the time and cost of generating the information system (IS). Title: CULTURE SENSITIVE EDUCATIONAL GAMES CONSIDERING COMMON SENSE KNOWLEDGE Author(s): Junia Anacleto, Eliane Pereira, Alexandre Ferreira, Ap. Fabiano P. de Carvalho and João Fabro Abstract: When an 8th-grade science teacher discusses the subject “contraceptive methods”, s/he has to consider situations and facts that are known by teenagers in order to better understand their behavior and define his/her approach. Suppose that the teacher says “the rhythm method is not one of the most efficient contraceptive methods”. But does the teacher really know which contraceptive methods are considered by that group of students? We propose here a framework for instantiating web games supported by common sense knowledge to approach the so-called “transversal themes” of the school curriculum in our country, such as sexual education, ethics and healthcare. The quiz game framework, called “What is it?”, is presented as a support for teachers in contextualizing content to the students’ local culture, promoting more effective and meaningful learning. Title: E-MAIL VISUALISATION - A COMPARATIVE USABILITY EVALUATION Author(s): Saad Alharbi and Dimitrios Rigas Abstract: As the number of e-mail accounts and messages grows rapidly, the traditional e-mail clients used nowadays have become difficult to use. Therefore, this paper shows how the usability of e-mail clients can be improved using information visualisation.
An experimental e-mail visualisation prototype was developed in order to organise the e-mail messages in the inbox. It visualises messages based on the date they were received, together with the senders’ e-mail addresses. An experiment was carried out to test whether information visualisation could significantly enhance the usability of e-mail clients. The performance of 30 participants was observed in a standard e-mail client and in the proposed prototype. The results showed that information visualisation could significantly improve the effectiveness and efficiency of e-mail clients. Title: THE USE OF 3D VISUALISATION AND INTERACTION TO MARKET A NEW NEAR-NET-SHAPE PIN TOOLING APPLICATION Author(s): Kevin Badni Abstract: This paper describes the methodologies used to create an interactive 3D multimedia application. The application was required to fulfil a business need of representing, as close to reality as possible, how a new product technology works. A number of IT solutions for the application were researched and are described in this paper. The construction of the application is then discussed, with the methodology behind a stereolithography parsing system and the algorithms used to detect collisions described in detail. The application has been successfully implemented by the business to build an engaged client base. Title: WISDOM ON THE WEB: ON TRUST, INSTITUTION AND SYMBOLISMS - A PRELIMINARY INVESTIGATION Author(s): Emma Nuraihan Mior Ibrahim, Nor Laila Md. Noor and Shafie Mehad Abstract: Trust in W-MIE is fairly new, and the risks associated with it are novel to users. Consequently, the question of how to design a technological artefact, in this case information that is perceived as trustworthy and can be understood, rationalized and controlled as part of the interface design strategy, is not well understood. This is the primary aim of our research.
We seek to explicate the role of trust from an explicitly institutional theory and semiotic paradigm to maximise the ‘goodness of fit’ for the future construction of sensitive information systems within a culture or domain through the analysis of their social context and their pragmatic and semantic levels of signification. We contend that institutional design features could align formal and informal signs of trust so that their meanings match through shared norms, assumptions, beliefs, perceptions and actions. In this preliminary study, we used card sorting to explore users’ trust perception of institutional signs operationalized in web-based information for Islamic content-sharing sites. These institutional signs are conceptualized under the four dimensions of institutional symbolism: content credibility, emotional assurance, brand/reputation and trusted third party. The results were cross-referenced with the initial framework for similarities and differences. Title: GENERIC STRATEGIES FOR MANIPULATING GRAPHICAL INTERACTION OBJECTS: AUGMENTING, EXPANDING AND INTEGRATING COMPONENTS Author(s): D. Akoumianakis, G. Milolidakis, D. Kotsalis and G. Vellis Abstract: This paper presents the notion of (user interface development) platform administration and argues for its increasing importance in the context of modern interactive applications. Platform administration entails strategies for manipulating diverse interaction components. Four such strategies are elaborated, namely augmentation, expansion, integration and abstraction, which collectively constitute the ingredients of a platform administration process.
The paper describes both the rationale for these strategies in the context of user interface development and their implementation details, as currently realized in an ongoing R&D project. Title: DIAMOUSES - AN EXPERIMENTAL PLATFORM FOR NETWORK-BASED COLLABORATIVE MUSICAL INTERACTIONS Author(s): Chrisoula Alexandraki, Panayotis Koutlemanis and Petros Gasteratos Abstract: DIAMOUSES is an ongoing research and development project aiming to establish a platform supporting distributed and collaborative music performance. DIAMOUSES is designed to host a variety of application scenarios, including music rehearsal, music improvisation and musical education, thus offering a tailorable, enterprise-wide solution for organizations offering network-based music performance services. This paper presents the architectural underpinnings of DIAMOUSES and elaborates on the set-up and results of a recent pilot experiment in music rehearsal. Title: APPLYING KANSEI ENGINEERING TO DETERMINE EMOTIONAL SIGNATURE OF ONLINE CLOTHING WEBSITES Author(s): M. N. Nor Laila, M. L. Anitwati and Mitsuo Nagamachi Abstract: In the discipline of design science, the integration of cognitive, semantic and affective elements is crucial in the conception and development of designed products. IT artefact design and development ignored the importance of affective elements until recent years. There is now a growing interest in addressing the affective elements of system design within the HCI community. Current literature reflects two main foci in the area: emotional design and its evaluation. Of the two, the latter is the more widely researched and reported. In this paper, we present our research attempt to establish a design method for organizing the emotional design requirements of e-commerce websites by applying Kansei engineering (KE).
We propose a Kansei website design method and demonstrate it by conducting a semantic evaluation of pre-selected online clothing websites using 40 Kansei words as descriptors of emotional sensation, organized on a 5-point Semantic Differential (SD) scale to form the Kansei checklist. 120 participants were asked to rate 35 pre-selected online clothing websites using the Kansei checklist. Cluster analysis and the partial least squares method were then performed to identify the Kansei word clusters, and from this result we uncover the relationship between the Kansei word clusters and online clothing website design. Title: MULTICHANNEL EMOTION ASSESSMENT FRAMEWORK - GENDER AND HIGH-FREQUENCY ELECTROENCEPHALOGRAPHY AS KEY-FACTORS Author(s): Jorge Teixeira, Vasco Vinhas, Eugenio Oliveira and Luis Paulo Reis Abstract: While affective computing and the entertainment industry still maintain a substantial gap between them, biosignals can be digitally acquired through low-budget technological solutions at negligible levels of invasiveness, preventing users from focusing their awareness on the equipment. The integration of electroencephalography, galvanic skin response and oximetry in a multichannel framework constitutes an effort on the path to identifying emotional states via biosignal expression. In order to induce and detect specific emotions, gender-specific sessions were defined based on the International Affective Picture System and performed in a controlled environment. The data provided were collected and visualized in real time by the session instructor and stored for posterior processing and analysis. Results obtained by distinct analysis techniques showed that high-frequency EEG waves are strongly related to emotions and are solid ground on which to perform accurate emotion classification. They also gave strong indications that females are more sensitive to emotion induction.
On the other hand, one might conclude that the success levels attained in relating emotions to biosignals are extremely encouraging, not only for the continuation of this research topic but also for the application of these results in domains such as multimedia entertainment, advertising and medical treatments. Title: ENABLING END USERS TO PROACTIVELY TAILOR UNDERSPECIFIED, HUMAN-CENTRIC BUSINESS PROCESSES - “PROGRAMMING BY EXAMPLE” OF WEAKLY-STRUCTURED PROCESS MODELS Author(s): Todor Stoitsev, Stefan Scheidl, Felix Flentge and Max Mühlhäuser Abstract: Enterprises face the challenge of managing underspecified, human-centric business processes, which are executed in distributed teams in a rather informal, ad-hoc manner. This has recently given hibernating CSCW and ad-hoc workflow research a new push. However, there is still a need to clearly perceive end users as the actual drivers of business processes and to enable them to proactively tailor these processes according to their actual expertise and problem-solving strategies. This paper presents the design and evaluation of a prototype for end-user development of weakly-structured process models through email-integrated task management. The presented CTM (Collaborative Task Manager) prototype uses “programming by example” to leverage user experience with standard email and task management applications and to extend user skills towards the definition of reusable process structures. By closely correlating with actual user work practices and the software environment, the tool provides a “gentle slope of complexity” for end users engaging in process tailoring activities. Title: FORM INPUT VALIDATION - AN EMPIRICAL STUDY ON IRISH CORPORATE WEBSITES Author(s): Mary Levis, Markus Helfert and Malcolm Brady Abstract: The information maintained about products, services and customers is a most valuable organisational asset. It is therefore important for successful electronic business to have high-quality websites.
A website must, however, do more than just look attractive: it must be usable and present useful, usable information. Usability essentially means that the website is intuitive and allows visitors to find what they are looking for quickly and without effort. This means careful consideration of the structure of information and navigational design. According to the Open Web Application Security Project, unvalidated input is one of the top ten critical web-application security vulnerabilities. We empirically tested twenty-one Irish corporate websites. The findings suggest that one of the biggest problems is that many failed to use mechanisms to validate even basic user data input at the source of collection in order to ensure reliability, potentially resulting in a database full of useless information. Title: A PERSONALIZED RECOMMENDER SYSTEM FOR WRITING IN THE INTERNET AGE Author(s): M. C. Puerta Melguizo, O. Muñoz Ramos, T. Bogers, L. Boves and A. van den Bosch Abstract: Several computer systems have been developed to support writing. Most of these systems, however, are mainly designed to support the processes of planning, organizing and connecting ideas. In general, these systems help writers to formulate external visual representations of their ideas and of the connections between the main topics that should be addressed in the paper, the sequence of the sections, etc. Because, with the advent of the World Wide Web, writing, researching and finding information to plan and structure the text have become increasingly intertwined, we think it is also necessary to develop systems able to support the task of finding relevant information without interfering with the writing process. The proactive recommender system À Propos is being developed to support writers in finding relevant information during writing.
We present our research findings and raise the question whether the tendency to interleave (re)search and writing implies a need for developing more comprehensive models of the cognitive processes involved in writing scientific and policy papers. Title: TTLS: A GROUPED DISPLAY OF SEARCH RESULTS BASED ON ORGANIZATIONAL TAXONOMY USING THE LCC&K INTERFACE Author(s): Vicki Mordechai, Ariel J. Frank and Offer Drori Abstract: One of the major problems in the process of Information Retrieval (IR) arises at the stage where the user reviews the results list. This paper presents the latest in a series of research works that aim at finding the most vital information components, within a list of search results, so as to assist the user in high-quality decision making as to which of the resulting documents are included within the sought-after results of the search task. We propose here a new model for displaying the results, named TTLS (Taxonomy Tree & LCC&K Snippet). The experimentation setup included execution of different search tasks by a group of 60 participants. The tasks were performed via the BASE and TTLS interfaces. From the resulting times comparison it is clear that the execution times of tasks done via the TTLS interface are shorter than those done via the BASE interface. It can also be seen that in the BASE interface more documents needed to be opened in order to locate the relevant information than in the TTLS interface. It turns out that the majority of users (77%) prefer to use the TTLS interface. Title: THE IMPORTANCE OF USABILITY CRITERIA ON LEARNING MANAGEMENT SYSTEMS: LESSONS LEARNED Author(s): Aparecido Fabiano Pinatti de Carvalho and Junia Coutinho Anacleto Abstract: This paper points to the importance of guaranteeing usability in Learning Management Systems (LMS) in order to achieve success in performing learning tasks in this kind of environment.
The paper presents the results of a case study in which learners had to perform several learning tasks on the TIDIA-Ae LMS. The usability problems observed during the case study's learning tasks are presented, as well as the difficulties the learners faced in some tasks due to those problems and the actions taken to make the execution of the learning activity's tasks viable. This paper intends (i) to call LMS developers' attention to the importance of usability in the development of tools to support the learning process; (ii) to point out interaction problems which might be avoided in LMSs; and (iii) to show some problems which can appear during a computer-supported learning activity and possible ways to deal with them. Title: A CONSIDERATION METHOD OF INFORMATION CONTENT TO BE APPLIED FOR THE DEMENTIA SITUATION AND THE “YUBITSUKYI” SYSTEM Author(s): Masahiro Aruga, Shuichiro Ono and Shuichi Kato Abstract: Communication between deaf-blind persons and others has become easier than before through the “YUBITSUKYI” system. When such persons develop dementia, it is estimated that the “YUBITSUKYI” system shows information-processing signals corresponding to the dementia situation. Therefore, it is important that the structural model of the information process and the information content applied to this situation be made clear. In this paper, firstly, the working of the “YUBITSUKYI” system is outlined and the necessity of analysing the information structure of such a communication situation is shown.
As a result, it is argued that a new information content, different from Shannon's ordinary information content, needs to be introduced into the analysis of the information process of dementia and its communication structure with the “YUBITSUKYI” system, and an example of consideration of the elements of this new information content is proposed on the basis of a discussion of Peirce's semiotics and other concepts of information elements. Title: FLEXIBILITY, COMPLETENESS AND SOUNDNESS OF USER INTERFACES - TOWARDS A FRAMEWORK FOR LOGICAL EXAMINATION OF USABILITY DESIGN PRINCIPLES Author(s): Steinar Kristoffersen Abstract: Models are often seen as context-free abstractions that make translation into their next step of refinement more efficient, while at the same time formal reasoning about their properties can detect and correct errors early in the process. Assessing this strategy for usability design, this paper proposes a broad set of novel concepts and explications in a framework for logical reasoning about the properties of an interactive system, as seen from the user perspective. The discussion is based on well-known principles of usability. The objective is to lay the groundwork, albeit still rather informally, for a formal program of assessing the usability of an interactive system using formal methods. Further research can then extend this into a strong algebra of interactive systems. Title: USABILITY CHALLENGES IN EMERGING MARKETS Author(s): Maryam Aziz, Chris Riley and Graham Johnson Abstract: Understanding emerging cultures and adapting new services accordingly is one of the biggest challenges faced by modern businesses today. Several user-centred approaches are employed during the life-cycle of service adoption. These approaches mainly involve the design and evaluation of a service based on specific requirements of the target market.
This paper describes an on-going multi-method case study which involves the evaluation of a self-service ATM system with a fingerprint sensor used for identification purposes, based in the financial sector of Pakistan. The paper is positioned to assess the validity of traditional usability evaluation methods in the context of emerging markets. These methods include one-on-one observations, in-depth interviews and sensor performance data analysis. The methods ensure both objective and subjective assessment of sensor use throughout. However, several difficulties faced in sensor evaluation, such as participant recruitment, lack of participant response and the impact of the local culture on user attitudes towards sensor use, are discussed. The paper also presents the preliminary findings and draws implications for both researchers and practitioners based on our experience. Title: COMPARING COLOR AND TEXTURE-BASED ALGORITHMS FOR HUMAN SKIN DETECTION Author(s): A. Conci, E. Nunes, J. J. Pantrigo and A. Sánchez Abstract: Locating skin pixels in images or video sequences where people appear has many applications, especially those related to Human-Computer Interaction. Most work on skin detection is based on modelling the skin in different color spaces. This paper explores the use of texture as a descriptor for the extraction of skin pixels in images. To this end, we analyzed and compared a proposed color-based skin detection algorithm (using the RGB, HSV and YCbCr representation spaces) with a texture-based skin detection algorithm which uses a measure called the Spectral Variation Coefficient (SVC) to evaluate region features. We showed the usefulness of each skin segmentation feature (color versus texture) through experiments that compared the accuracy of both approaches on the same set of hand-segmented images.
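As a concrete illustration of the color-based approach the preceding abstract describes, the sketch below classifies a pixel as skin with a simple RGB threshold rule. The thresholds follow a commonly cited uniform-daylight heuristic; they are not the authors' actual algorithm, which the abstract does not specify.

```python
def is_skin_rgb(r, g, b):
    """Classify an (r, g, b) pixel as skin with a simple threshold rule.

    Illustrative sketch only: the thresholds below are a widely used
    uniform-daylight heuristic, not the algorithm from the paper.
    """
    return (r > 95 and g > 40 and b > 20          # enough brightness per channel
            and max(r, g, b) - min(r, g, b) > 15  # skin is not grayscale
            and abs(r - g) > 15                   # red and green well separated
            and r > g and r > b)                  # red dominates for skin tones

def skin_mask(pixels):
    """Boolean mask over an iterable of (r, g, b) tuples."""
    return [is_skin_rgb(r, g, b) for r, g, b in pixels]
```

A texture-based detector would instead score small regions (e.g. with a measure such as the SVC mentioned above) rather than individual pixel colors.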
Title: MOTIVATING SOFTWARE ENGINEERS - A THEORETICALLY REFLECTIVE MODEL Author(s): Nathan Baddoo, Sarah Beecham, Tracy Hall, Hugh Robinson and Helen Sharp Abstract: We present a model of motivation for software engineers. This model suggests that software engineers are motivated by two sets of factors, intrinsic and extrinsic motivators, where a subset of the intrinsic motivators are aspects inherent in the job that software engineers do. It shows that software engineers are oriented towards this particular set of motivators because of their characteristics, which in turn are mediated by individual personality traits and environmental factors. The model shows that the external outcomes of software engineers' motivation are benefits such as staff retention, increased productivity and reduced absenteeism. The model is derived from a Systematic Literature Review of motivation in software engineering. We have constructed it by engaging in practices that reflect good principles of model building as prescribed by the operational research and scientific management disciplines. We evaluate our model for theoretical efficacy and show that, in comparison to other attempts at modeling software engineers' motivation, it reflects a wide range of the classic concepts that underpin the subject area of motivation. We argue that this theoretical efficacy validates the model and therefore improves confidence in its use. We suggest that our model serves as a valuable starting point for managers wanting to understand how to get the best out of software engineers, and for individuals wanting to understand their own motivation or embarking on career choices.
Title: EFFECTIVENESS AND PREFERENCES OF ANTHROPOMORPHIC USER INTERFACE FEEDBACK IN A PC BUILDING CONTEXT AND COGNITIVE LOAD Author(s): Pietro Murano, Christopher Ede and Patrik O’Brian Holt Abstract: This paper describes an experiment and its results concerning research that has been going on for a number of years in the area of anthropomorphic user interface feedback. The main aims of the research have been to examine the effectiveness and user satisfaction of anthropomorphic feedback in various domains. The results are of use to all interactive systems designers, particularly when dealing with issues of user interface feedback design. Currently, work in the area of anthropomorphic feedback offers no global conclusions concerning its effectiveness and user satisfaction capabilities. This research investigates ways of reaching some global conclusions concerning this type of feedback. The experiment detailed here concerns the specific domain of software for in-depth learning in the context of PC building. Anthropomorphic feedback was compared against equivalent non-anthropomorphic feedback. The results were not statistically significant enough to suggest that one type of feedback was better than the other. A further aim was to examine the types of feedback in relation to Cognitive Load Theory. The results suggest that the feedback types did not negatively affect cognitive load. Title: E-RETAIL: INTERACTION OF INTELLIGENT SELLING SPACE WITH PERSONAL SELLING ASSISTANT Author(s): Alain Derycke, Thomas Vantroys, Benjamin Barby and Philippe Laporte Abstract: With the availability of nomadic computing, and its new user interaction devices connected through wireless networks, it is clear that the traditional way of delivering commerce will evolve towards “pervasive commerce”.
This paper presents our approach, based on an Intelligent Selling Space, to augment interaction in the store department, with the seller equipped with a personal assistant. For that purpose, we defined interaction patterns and a generic infrastructure based on OSGi and UPnP. Our approach is currently being evaluated in a hypermarket. Title: FACTORS INFLUENCING THE LEARNING PERFORMANCE OF U-LEARNING SYSTEMS Author(s): SangHee Lee and DongMan Lee Abstract: This study examines the factors associated with user satisfaction in u-Learning, wherein four major factors are identified that influence interaction and learning performance. These factors are pervasive connectivity, context awareness, academic motivation, and flow. A survey of 226 u-Learning users was conducted and the data collected were used to test theoretically expected relationships. To verify our research model, we inspected its validity through factor and reliability analyses. The results of the analyses, by LISREL, are as follows. (1) Ubiquitous characteristics such as pervasive connectivity and context awareness had a significant influence on the effectiveness of u-Learning systems. (2) Learners' characteristics such as academic motivation and flow played an important role in the effectiveness of u-Learning systems. (3) Learners' interaction factors had an important influence on the performance of u-Learning systems. Title: PERSONAL AND SOCIAL INFORMATION MANAGEMENT WITH OPNTAG Author(s): Lee Iverson, Maryam Najafian Razavi and Vanesa Mirzaee Abstract: We examine the principles of personal information management in a social context and introduce OpnTag, an open source web application for note-taking and bookmarking developed to experiment with these principles. We present the design motivation and technical structure of OpnTag, along with a discussion of how it supports our design philosophy.
We also describe a few examples of how it is actually used, how this usage has improved our understanding of social-personal information management (SPIM) principles, and our plans for future enhancements. Title: WEBTIE: A FRAMEWORK FOR DELIVERING WEB BASED TRAINING FOR SMES Author(s): Parveen K. Samra, Richard Gatward, Anne James and Virginia King Abstract: The need for training in every industry sector is imperative to equip companies for sustained competitive advantage (Pedler, Boydell and Burgoyne, 1998). In the light of globalisation, the manufacturing industry now acknowledges the need for training (Khan, Bali & Wickramasinghe, 2007). However, with a ‘myopic’ view of strategy, operational demands and resource constraints, the inability to take on board the level of training evident in larger organisations puts SMEs at a tremendous disadvantage (Mazzarol, 2004:1). A yearlong pilot project undertaken in the UK set about delivering online training to 100 SMEs, equating to 500 employees, all within manufacturing. The Cawskills Project delivered online IT training directly to employees. The findings from the project have informed the development of a generic but adaptive model for SMEs that addresses the need for training by considering operational demands, resource constraints and infrastructure. The model brings together principles of teaching and learning practices evident in classroom-based education with the learning requirements of SMEs and employees. The incorporation of online learning aims to deliver training content Just in Time (JIT) and proposes training events that are embedded in SMEs’ strategic direction.
Title: SUPPORTING UNSTRUCTURED WORK ACTIVITIES IN EMERGENT WORK PROCESSES Author(s): Cláudio Sapateiro and Pedro Antunes Abstract: When existing information systems and organizational procedures fail to support work needs, people engage in informal networks of relations and make use of their tacit knowledge, thereby promoting the emergence of unstructured work activities. To improve the consistency and effectiveness of such practices, we propose a model and a prototype to assist collaboration needs in such scenarios. Our contribution defends the need to construct a shared awareness to improve situation understanding and collaboration. Building on Reason's Swiss Cheese model of accidents, we propose the use of a collaboratively constructed artifact, Situation Matrices (SM), to relate the different situation dimensions. The information needs in the existing contexts of action where the situation unfolds will be supplied by different views over the (sub)set of matrices. Title: IMPROVING ORGANIZATIONAL COLLABORATION BASED ON DOCUMENT TAGGING CONCEPT Author(s): Cláudio Sapateiro, Bruno Vilhena and Pedro Moura Abstract: During work processes, many collaboration structures rely on document sharing to feed information needs. For this sharing to be effective, the involved parties must also share a common understanding of the meaning of the information being exchanged. Personal and community information and knowledge structures are highly implicit and tend to be invisible to others. In this work we propose a classification of users' personal electronic assets inspired by evolving Web 2.0 application concepts: tagging and social bookmarking. By adopting this approach we can profile users' and communities' knowledge domains and, by externalizing this information, improve collaboration structures. Title: COMMUNICATION-BASED MODELLING AND INSPECTION IN CRITICAL SYSTEMS Author(s): Marcos Salenko Guimarães, M. Cecilia C.
Baranauskas and Eliane Martins Abstract: Critical systems are software-based systems whose failure would provoke catastrophic or unacceptable consequences for human life. In avionic systems we have seen significant evolution related to aircraft cockpits. The Personal Air Vehicle (PAV) represents a new generation of small aircraft being conceived to extend personal air travel to a much larger segment of the population, proposing new concepts of interaction and communication in aviation. In this domain, communication is a critical factor, especially among users operating the system through its interfaces. This paper presents a technique for modelling and inspecting communication in the user interface in the avionics domain; a case study illustrates the proposal for artefacts of the PAV domain. Title: MICROFORMATS BASED NAVIGATION ASSISTANT - A NON-INTRUSIVE RECOMMENDER AGENT: DESIGN AND IMPLEMENTATION Author(s): Anca-Paula Luca and Sabin C. Buraga Abstract: The many ways in which we rely on the information available on the web to solve ever more tasks encountered in day-to-day life have led to the question of whether machines can help us parse these amounts of data and bring the interesting content closer to us. This kind of activity, most often, requires machines to understand human-defined semantics, which, fortunately, can easily be done in today's web through semantic markup. The purpose of the proposed project is to build a flexible tool that understands the behaviour of a user on the web and filters out the irrelevant data, presenting to the user only the information he/she is most interested in, while being as discreet as possible: no preference settings or explicit feedback are required from the user.
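The semantic markup the microformats abstract relies on can be harvested with very little machinery. The sketch below collects links carrying the rel="tag" microformat, one simple signal a recommender agent could use to infer a page's topics; it is an illustrative assumption, not the actual parsing logic of the described tool.

```python
from html.parser import HTMLParser

class RelTagExtractor(HTMLParser):
    """Collect hrefs of links marked rel="tag" -- the microformat hint
    a navigation assistant could use to infer page topics.
    Illustrative only; the described agent's parser is not specified."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # rel may hold several space-separated tokens, e.g. "tag nofollow"
        if tag == "a" and "tag" in (a.get("rel") or "").split():
            self.tags.append(a.get("href"))

parser = RelTagExtractor()
parser.feed('<p><a rel="tag" href="/tags/python">Python</a>'
            '<a href="/about">About</a></p>')
```

After feeding a page, `parser.tags` holds the tag URLs, which the agent could aggregate across visited pages to build a topic profile.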
Title: COGNITIVE MODELING OF INTERACTIONS DURING A NEGOTIATION PROCESS Author(s): Charlène Floch, Nathalie Chaignaud and Alexandre Pauchet Abstract: This article presents a study of human negotiations carried out in order to improve BDIGGY, an agent model with negotiation skills. The study is based on logs of real negotiation rounds, obtained through a psychological experiment. We propose an utterance model including performatives applied to mental states and a dialog model using timed automata, both based on the BDI concepts. Title: MODELING HUMAN INTERACTION TO DESIGN A HUMAN-COMPUTER DIALOG SYSTEM Author(s): A. Loisel, N. Chaignaud and J.-Ph. Kotowicz Abstract: This article presents the Cogni-CISMeF project, which aims at improving medical information search in the CISMeF system by including a conversational agent that interacts with the user in natural language. To study the cognitive processes involved during information search, a bottom-up methodology was adopted. An experiment was set up to obtain human dialogs related to medical information search. The analysis of these dialogs underlines the establishment of a common ground and accommodation effects towards the user. An artificial agent model is proposed that guides users in their information search by proposing examples, assistance and choices. Title: CULTURAL DIFFERENCES BETWEEN TAIWANESE AND GERMAN WEB USER - CHALLENGES FOR INTERCULTURAL USER TESTING Author(s): Anna Karen Schmitz, Thomas Mandl and Christa Womser-Hacker Abstract: Measuring a user's performance with a web site reveals that the user's culture is an important factor. A test of Taiwanese and German students resulted in various significant differences in effectiveness, efficiency and satisfaction. The task design and results of the experiment are presented. Human-computer interaction can be evaluated by different means. The paper discusses how methods need to be interpreted in international user test settings.
For example, time might not be a valid measure in long-term oriented cultures for all interaction tasks. Title: COMPLEX USER BEHAVIORAL NETWORKS AT ENTERPRISE INFORMATION SYSTEMS Author(s): Peter Géczy, Noriaki Izumi, Shotaro Akaho and Kôiti Hasida Abstract: We analyze human behavior on a large-scale enterprise information system. Employing a novel framework that efficiently captures the complex spatiotemporal dimensions of human dynamics in electronic spaces, we present vital findings about knowledge workers' behavior on an enterprise intranet portal. The browsing behavior of knowledge workers resembles a complex network with significant concentration on navigational starters. The common browsing strategy utilizes knowledge of the starting navigation point and recollection of the traversal pathway to the target. The complex traversal network topology has a small number of behavioral hubs concentrating and disseminating the browsing pathways. The human browsing network topology, however, does not match the link topology of the web environment. Knowledge workers generally underutilize the available resources, have focused interests, and exhibit diminutive exploratory behavior. Title: ADAPTED AND CONTEXTUAL INFORMATION IN MEDICAL INFORMATION SYSTEMS Author(s): Karine Abbas, André Flory and Christine Verdier Abstract: Today, due to the integration of pervasive computing into current information systems, new challenges arise: the amount of data increases tremendously over time, and users are more and more heterogeneous, with different needs and roles. These challenges require systems with some adaptation capacity. Personalization and user modelling are the key elements in proposing solutions to these challenges: on the one hand, they allow relevant, adapted information to be provided to the user; on the other hand, they considerably reduce cognitive data overload.
To this end, this paper presents a personalized access technique which takes into account the different requirements of personalization for different users' needs and different contextual situations. The technique involves two steps: the first consists in building a profile model and a context model; the second uses these models to personalize semi-structured data. Title: PROTOCOL MODELS OF HUMAN-COMPUTER INTERACTION Author(s): Ashley McNeile, Ella Roubtsova and Gerrit van der Veer Abstract: Conventional approaches to modeling human-computer interaction do not always succeed in producing behaviorally complete models of manageable size. We argue that the reason for this weakness lies in the lack of support for composition, and propose the recently developed Protocol Modeling approach as an alternative that overcomes these problems. The semantics of Protocol Modeling support separation and composition of concerns in models of human-computer interaction, abstract modeling of system internals, and the building of executable models to explore and refine the desired behavior. We show that Protocol Modeling supports a crucial property sought in a modeling method if it is to scale to complex problems, namely the ability to reason about the modeled behavior of the whole based on examination of only a part (sometimes called "modular" or "local" reasoning). Title: AN INTERFACE ENVIRONMENT FOR LEARNING OBJECT SEARCH AND PRE-VISUALISATION Author(s): Laura Sánchez García, Rodrigo Octávio de Oliveira Mello, Alexandre Ibrahim Direne and Marcos Sfair Sunye Abstract: Learning Objects – LOs – were devised in order to cut down on production costs and time, as well as to facilitate the distribution and reuse of didactic contents by means of a series of functions, such as reutilization, traceability, interoperability, durability and easy editing. Our main objective in the present paper is to propose an interface environment for LO search and pre-visualization.
The distinctive features of this environment are the easy access to all available search refinement functions, allowing users to fulfil their search objectives without much cognitive effort, and the well-structured LO pre-visualization. We started the project by defining a set of criteria to assess LO research and solutions, and one such work was used as our starting point for the interface environment. Our theoretical foundation relies on HCI (Human-Computer Interaction), and the major outcome of the present project is a non-functional prototype (in storyboard form) of the proposed interface environment, which in turn solves the majority of the problems we came across when reviewing the pertinent literature. Title: A GUIDED INTERFACE FOR WEB INTERACTION Author(s): Juan Falgueras, Antonio Carrillo, Daniel Dianes and Antonio Guevara Abstract: Goals Driven Interaction is an interaction style specially conceived for occasional users, i.e., those who are going to use an interactive application either on one-off or sporadic occasions, as is the case of Web application users. The goal of this type of interaction is for these users to be able to use the application correctly and to carry out the desired task(s), with the minimum necessary time spent on learning. In this paper we describe the aforementioned interaction style and its corresponding user interfaces, and analyze how it may be used in web applications. Title: ASSISTIVE TECHNOLOGIES AND TECHNIQUES FOR WEB BASED EGOV IN DEVELOPING COUNTRIES Author(s): Heiko Hornung, M. Cecília C. Baranauskas and Claudia A. Tambascia Abstract: Electronic government (eGov) is intended to serve the whole spectrum of the population. To be able to access the respective services and thus benefit from eGov, many users require assistive technologies and techniques (ATT). This demand is implied by auditory, visual or other impairments, but also by low literacy skills.
Therefore, in the context of this paper, ATT is interpreted in a broader sense than the classic definitions of "assistive technology" oriented to impairments or other special needs. This paper explores the range of ATT against the background of the special conditions found in developing countries. We investigate which types of users benefit from ATT in which ways, and discuss which categories of users have requirements that are not yet covered by the current state of ATT. The reasons for the remaining access problems may be due to factors related to the country context or to current technological limitations. Some lessons learned from our findings are presented which indicate directions for further research. Title: A 3D USER INTERFACE FOR THE SEMANTIC NAVIGATION OF WWW INFORMATION Author(s): Manuela Angioni, Roberto Demontis, Massimo Deriu and Franco Tuveri Abstract: The automatic creation of a conceptual knowledge map using documents coming from the Web is a very relevant problem because of the difficulty of distinguishing between documents with valid and invalid content. In this paper we present an improved search engine GUI for displaying and organizing alternative views of data through a 3D graphical interface, and a method for organizing search results using a semantic approach during the storage and retrieval of information. The presented work deals with two main aspects. The first regards the semantic aspects of knowledge management, in order to support the user during query composition and to supply information strictly related to the user's interests. The second concerns the advantages of adopting a 3D user interface to provide alternative views of data.
Title: MUICSER: A MULTI-DISCIPLINARY USER-CENTERED SOFTWARE ENGINEERING PROCESS TO INCREASE THE OVERALL USER EXPERIENCE Author(s): Mieke Haesen, Kris Luyten, Karin Coninx, Jan Van den Bergh and Chris Raymaekers Abstract: The growing use and variety of mobile and networked computing devices requires a flexible HCI engineering process to meet the great diversity of demands. In this paper we present an incremental and user-centered process to create suitable and usable user interfaces. Validation is done throughout the process by prototyping; the prototypes evolve from low-fidelity versions to the final user interface. Applications developed with this process are more likely to correspond to users' expectations. Furthermore, the process takes into account the need for sustainable evolution often required by modern software configurations, by combining traditional software engineering with a user-centered approach. We think our approach is beneficial in its scope, since it considers evolving software beyond the deployment stage and supports a multi-disciplinary team. In this paper we illustrate the proposed process and artifacts using a case study. Title: A USER-INTERFACE ENVIRONMENT SOLUTION AS AN EDUCATIONAL TOOL FOR AN ONLINE CHESS SERVER ON THE WEB Author(s): Juliano Picussa, Laura S. García, Juliana Bueno, Márica V. R. Ferreira, Alexandre I. Direne, Luis C. E. de Bona, Fabiano Silva, Marcos A. Castilho and Marcos S. Sunye Abstract: This article describes an interaction and interface environment for a public online chess server on the web, used as an educational tool. The main purpose of the environment is to improve chess teaching in Brazilian public schools. The vast majority of such online chess servers take for granted that users are specialists rather than learners.
The solution described in this article is inserted in an educational environment, aiming to provide users with direct access to contextually significant actions by means of strategic and operational help. Title: DESIGNING MOBILE MULTIMODAL ARTEFACTS Author(s): Tiago Reis, Marco de Sá and Luís Carriço Abstract: Users’ characteristics and their different mobility stages sometimes reduce or eliminate their capability to perform paper-based activities. The support of such activities and their extension through the utilization of non-paper-based modalities introduces new perspectives on their accomplishment. We introduce mobile multimodal artefacts and an artefact framework as a solution to this problem. We briefly explain the main tools of this framework and detail two versions of the multimodal artefact manipulation tool: a visual-centred version and an eyes-free version. The design and evaluation process of the tool is presented, including several usability tests. Title: PROMOTING COMMUNICATION AND PARTICIPATION THROUGH ENACTMENTS OF INTERACTION DESIGN SOLUTIONS - A STUDY CASE FOR VALIDATING REQUIREMENTS FOR DIGITAL TV Author(s): Elizabeth Furtado, Albert Schilling, Fabrício Fava and Liadina Camargo Abstract: This paper discusses the use of theatrical techniques in two experiments to attain the following objectives of interaction design: to communicate cross-cultural users’ needs and expectations for iDTV (interactive Digital TeleVision) services and to explore new ideas in a participatory way. These two objectives are particularly important when the systems involved are unknown to people, and professionals need to gather the requirements of such systems. In the first experiment, we showed how stories told through theater can communicate the purposes of iDTV services to the audience. In the second experiment, we used role-playing in participatory interaction design sessions to explore new ideas with users.
The results are described by discussing the strengths and weaknesses of this approach. Title: DISABILITY YOUNG CHILDREN LEARNING PROCESS SUPPORTED WITH MULTIMEDIA SOFTWARE: CASE STUDY Author(s): Micaela Esteves, Filipe Pinto, Audrey Silva and Ana Duarte Abstract: Through technological resources, learning methods have achieved more effective and user-friendly results than traditional approaches, even when the target audience is young children with special needs. Addressing usability, this paper introduces software directed at children under 13 years old with visual and auditory special needs. The application intends to explain the definition, origin and workings of some electronic devices used in the children's daily lives, such as the television, telephone, electricity or personal computer. Throughout this paper, all the work developed under pedagogical guidelines specially directed at young children with disabilities is presented, and the analysis, programming and testing phases are explained. “Disabled persons have the inherent right to respect for their human dignity (…) whatever the origin, nature and seriousness of their handicaps and disabilities, have the same fundamental rights as their fellow-citizens….” (Declaration on the Rights of Disabled Persons, United Nations resolution, December 1975). Title: WALKTHROUGH METHODS FOR IMPROVING THE SYSTEM FIT TO THE USERS’ TASKS WITHIN MANUFACTURING OPERATIONS Author(s): Taru Salmimaa, Inka Vilpola and Katri Terho Abstract: Designing an information system for a manufacturing context poses challenges, such as efficiency and user requirements. Therefore, manufacturing systems should be evaluated with real users before their implementation. The purpose of the evaluation is to ensure that a system supports the work flows and that users are introduced to a new system in the early stages of design.
Walkthrough methods provide a means to simultaneously review a sequence of actions and involve the users in the design activities. In this paper, a pluralistic walkthrough method was used for evaluating the user interface of a manufacturing system. In the session, the target user groups performed predefined task scenarios with a paper prototype of the system. The results indicate that walkthrough methods could be applicable to manufacturing systems design, and the results could improve the system design and user acceptance. Title: OVERVIEW OF WEB CONTENT ADAPTATION Author(s): Jérémy Lardon, Mikaël Ates, Christophe Gravier and Jacques Fayolle Abstract: Nowadays, Internet content can be reached from a vast set of different devices: mobile devices (mobile phones, PDAs, smartphones) and, more recently, TV sets through browser-embedding Set-Top Boxes (STBs). The diverse characteristics that define these devices (input, output, processing power, available bandwidth, ...) force content providers to keep as many versions as there are targeted devices. Many research projects address all or part of this topic. In this paper we analyze the originality of our proposal by comparing it with previous research. Title: AN INTERACTIVE INFORMATION SEEKING INTERFACE FOR EXPLORATORY SEARCH Author(s): Hogun Park, Sung Hyon Myaeng, Gwan Jang, Jong-wook Choi, Sooran Jo and Hyung-chul Roh Abstract: As the Web has become a commodity, it is used for a variety of purposes and tasks that may require a great deal of cognitive effort. However, most search engines developed for the Web provide users with only searching and browsing capabilities, leaving all the burden of manipulating information objects to the users. In this paper, we focus on an exploratory search task and propose an underlying framework for human-Web interactions. Based on the framework, we designed and implemented a new information seeking interface that helps relieve the users’ cognitive burden.
The new human-Web interface provides a personal workspace that can be created and manipulated cooperatively with the system, which helps users conceptualize their information seeking tasks and record their trails for future use. This interaction tool has been tested for its efficacy as an aid for exploratory search. Title: IMPROVING USER SATISFACTION IN THE POST-IMPLEMENTATION PHASE OF A LARGE-SCALE INFORMATION SYSTEM Author(s): Colman Gantley Abstract: This paper presents a framework for enhancing user satisfaction with implemented information systems. Frequently, the post-implementation phase of a system’s lifecycle is ignored and poor interfacing, for example, goes undetected until users stop using the system. A poorly designed system interface becomes a barrier for users, and they become increasingly reluctant to tolerate it. If users resist working with the technology, the potential for the system to generate significant organisational performance gains may be lost, rendering the introduced system a costly mistake. This framework focuses on six different elements, including training, functionality, reliability, working environment and interfacing, all centred around the actual users of the system, with the aim of ensuring total user satisfaction with the new system. Title: A NEW APPROACH TO THE AUTOMATIC WEB ACCESSIBILITY Author(s): Juan Manuel Fernández, Vicenç Soler and Jordi Roig Abstract: For a user, total Web Accessibility is a goal that can be reached only with the aid of automatic tools. The number of Web pages created in an inaccessible manner is very high, and converting them to be accessible can be impossible: the economic cost is unacceptable for an institution or commercial company, and the time it may take is longer than the time to develop a new one. We present a study of different tools that can help achieve total Web Accessibility.
With this study, we can see the advantages of automatic correction and the possibilities that this kind of tool offers. The result of the study shows how the applications adapt Web pages both to the World Wide Web Consortium HTML grammar and to the Web Content Accessibility Guidelines. With this study we find an automatic way to improve Web Accessibility and to solve the problems it can pose to Web page authors. This way consists of combining two tools; the combination gives us results that allow disabled people to use a system that can convert the entire Web into an accessible form. Title: EMPIRICAL MULTI-ARTIFACT KNOWLEDGE MODELING FOR DIALOGUE SYSTEMS Author(s): Porfírio Filipe and Nuno Mamede Abstract: This paper presents a knowledge modeling approach to improve domain independence in Spoken Dialogue System (SDS) architectures. We aim to support task-oriented dialogue management strategies via an easy-to-use interface provided by an adaptive Domain Knowledge Manager (DKM). The DKM is a broker that centralizes the knowledge of the domain using a Knowledge Integration Process (KIP) that merges local knowledge models on the fly. A local knowledge model defines a semantic interface and is associated with an artifact, which can be a household appliance in a home domain or a cinema in a ticket-selling domain. We exemplify the reuse of a generic AmI domain model in a home domain and in a ticket-selling domain, redefining the abstractions of artifact, class, and task. Our experimental setup is a domain simulator specially developed to reproduce an Ambient Intelligence (AmI) scenario. Title: IMPROVING HTML DATA TABLES NAVIGATION - A METHOD TO OBTAIN INFORMATION FOR VISUALLY IMPAIRED PEOPLE Author(s): Juan Manuel Fernández, Vicenç Soler and Jordi Roig Abstract: Nowadays, the broad use of new Web-based technologies provides facilities to people all over the world, except impaired people.
This leads us to the field of Web Accessibility, and one of the biggest problems in it is the use of data tables in HTML documents. For disabled users, elements such as these and their natural bi-dimensional structure make navigation more difficult than for the rest of the users. In this paper we present a solution to avoid the difficulties that disabled users find while navigating. The system we propose is based on the way a non-disabled person visualizes the table contents, but avoids the processing of images, which is the natural procedure. Title: ANATOMY OF A VISUALIZATION ON-DEMAND SERVER - A SERVICE ORIENTED ARCHITECTURE TO VISUALLY EXPLORE LARGE DATA COLLECTIONS Author(s): Romain Vuillemot, Béatrice Rumpler and Jean-Marie Pinon Abstract: Facing the relentless increase in information volume, users are not only lost in the information overload, but also among the various ways to depict it. In this paper, we tackle this issue by providing end-users access to up-to-date visualization techniques, using remote services coupled to their local interactive environment. The outline of a Visualization-On-Demand (VizOD) architecture is introduced, packaging information visualization processes into services reachable over a network. Our goal is to provide end-users flexible and personalized visual overviews of large datasets. We implemented a prototype that partially validates our architecture, and we discuss preliminary results of our experiments and give perspectives on future work. Title: ISSUES IN IS BASED ENGINEERING ASSET MANAGEMENT - AN AUSTRALIAN PERSPECTIVE Author(s): Abrar Haider Abstract: Engineering organisations are increasingly investing in information systems to support the lifecycle of the assets they manage. These information systems carry the promise of integrating the activities of an asset lifecycle through a process-driven approach to operational efficiency.
Despite this promise, the current status of information systems adoption in engineering enterprises suggests a gap between the potential and the actual value they deliver to the organisation. This paper reports the experience of an Australian asset managing organisation in maximising the value of information systems for the asset lifecycle. It aims to identify the issues and challenges that impede the value profile of these systems in relation to process efficiencies as well as the control and management of the asset lifecycle. The findings point to, among others, a multiplicity of systems and a lack of integration between them, a lack of fit between processes and technology, and a lack of consideration for managing asset lifecycle learnings. Title: IS BASED ASSET MANAGEMENT - AN EVALUATION Author(s): Abrar Haider Abstract: Engineering asset managing organisations use a variety of information systems for process efficiency, control, and management. The essential aim of these systems is to provide an integrated view of the asset lifecycle, along with the acquisition, exchange, manipulation, and storage of lifecycle information. However, IS investments in infrastructure asset management are often fragmented, seldom involve all project participants, and rarely extend across more than a single phase of the project’s life cycle. This paper highlights these issues through a case study conducted in an Australian water infrastructure asset managing organisation. Title: AN APPROACH TO GUIDELINE INSPECTION OF WEB PORTALS Author(s): Andrina Granić, Ivica Mitrović and Nikola Marangunić Abstract: The objective of the overall research is the design of a discount evaluation methodology for web portal assessment. Our experience accords with the claim that we should not rely on isolated evaluations, but instead combine assessment methods.
A number of problems were identified through testing user tasks in scenario-based usability testing, while others were discovered through tasks mentally simulated by HCI experts using an inspection method. This paper reports on the experience regarding guideline inspection of broad-reach web portals. Although the comprehensive quantitative and qualitative data obtained showed that the designed guideline inspection needs some improvements, it has proved very promising. A revision of the evaluation form, along with a subsequent assessment with an adequate expert sample, is needed. Title: WEB BROWSING FOR VISUALLY IMPAIRED Author(s): Gulden Uchyigit Abstract: Availability of information is no longer an issue. The internet is a vast resource where details relating to any subject are available almost instantaneously; however, availability does not equal accessibility. The internet is a highly visual medium, and users with visual disabilities, including blindness and print impairments, have no choice but to use sub-standard screen readers or expensive Braille displays. Poorly designed web pages that do not conform to accessibility standards or have complex layouts are bewildering, and users have difficulty in distinguishing between useful and irrelevant information. This paper presents a system which allows visually impaired users to interact with a computer in a more effective manner. The system employs speech synthesis as an inexpensive and portable method of output and uses text analysis methods to ease the process of navigation. The system’s usability and learnability are demonstrated through end-user testing. Title: TRADITIONAL LEARNING VS. E-LEARNING - SOME RESULTS FROM TRAINING CALL CENTRE PERSONNEL Author(s): Mark Miley, James A. Redmond and Colm Moore Abstract: An analysis of a survey of call centre trainees (n = 43) who underwent a traditional classroom course and an e-Learning course showed little difference in preference for either learning mode.
When triangulated against the SPOT+ study (n = 2000), the results were similar. Although the two courses differed in duration (7 hours vs. 1 hour), an argument can be made for blended learning. Despite the widely expressed preference for the traditional classroom mode, it appears that the e-Learning mode can be equally acceptable, perhaps when the duration is much shorter, as happened here. Title: AN INFORMATION SYSTEM FOR THE SHORTEST ORIGIN-DESTINATION ROUTE IN A TRANSPORTATION NETWORK Author(s): José Raymundo Marcial Romero, Oscar Sánchez Flores, Héctor A. Montes Venegas, Luis Nuñez Vázquez and Israel Hernández Sánchez Abstract: There exist several information systems to find and display the shortest origin-destination route in a transportation network. Usually these systems are created by organizations purchasing expensive software in the hope of building them easily, quickly and correctly. This goal, however, still remains elusive. In this paper, an information system that finds and displays the shortest origin-destination route in a transportation network is presented. The system is built mainly using free software. At least three fast algorithms to obtain the shortest route in a given network were found in the literature. From these three, one was selected as the most efficient and least memory-consuming according to the experimental analysis carried out. The functionality of the system is shown using the transportation network of Toluca, one of the largest cities in Mexico. Title: A COLLABORATIVE WEB SYSTEM TO IMPROVE CITIZENS-ADMINISTRATION COMMUNICATION Author(s): V. M. R. Penichet, M. Tobarra, M. D. Lozano, J. A. Gallud and F. Montero Abstract: Improving the quality of service and simplifying procedures and tasks are among the goals of public administration.
Administrative procedures in town councils, intelligent agents, workflow processes and Web-based computing are some of the elements that can be combined to obtain a user-oriented system able to support feedback between citizens and their Town Council. Notifications by means of e-mails and messages in the user’s intranet facilitate user-to-civil-servant and system-to-user communication and collaboration. In this paper, a Complaints and Suggestions Web-Based Collaborative Procedure (CS-WCP) is presented as an advanced e-administration solution. All the administrative procedure steps are analyzed through workflow modelling, and then every task is coordinated. Intelligent agents allow some tasks, which used to be done manually, to be performed automatically. This system allows people with different cultures, religions, knowledge, natures and needs to interact with the local administration in an easy and intuitive way. Title: INFORMATION TECHNOLOGY IN VIRTUAL ENTERPRISE Author(s): Stefan Trzcielinski and Aleksander Jurga Abstract: There is a common belief expressed in the subject literature that information technology (IT) is a critical contingency factor for the virtual organization. Therefore, our interest was to investigate the relation between the intensity of IT use and the level of organization virtuality. For that purpose we elaborated a tree of features describing virtuality and categorized IT tools into two groups, taking into consideration the functions which they play or can play in a virtual organization and the range of their influence on the cooperation of distributed partners. The research sample included 45 firms, mostly small and medium, belonging to six branches. The firms had been chosen according to pre-selection criteria confirming that they construct a network to run their operations. Indicators were elaborated to measure the level of virtuality and the intensity of IT use. Next, the correlation between these two levels was checked.
In this paper we present what we have found about this relation. Title: A REASONED APPROACH TO ERROR HANDLING - POSITION PAPER ON WORK-IN-PROGRESS Author(s): Tamara Babaian and Wendy Lucas Abstract: It is widely acknowledged that Enterprise Resource Planning (ERP) systems are difficult to use. Our own studies have revealed that one of the largest sources of frustration for ERP users is the inadequate support in error situations afforded by these systems. We propose an approach to error handling in which reasoning on the part of the system enables it to behave as a collaborative partner, helping its users understand the causes of errors and, whenever possible, make the necessary corrections. While our focus here is on ERP systems, this approach could be applied to any system to improve its error handling capabilities. Title: USER PSYCHOLOGICAL APPRAISAL OF ENTERPRISE WEB 2.0-DEPLOYMENT Author(s): Sacha Helfenstein Abstract: Effective exploitation of emerging Web-based social information and communication tools has become the new mandate in contemporary enterprise IT strategy. However, currently available assessments and recommendations are generally biased in favour of normative technical and business considerations, and deficient in their emphasis on the implementation perspective of technology deployment. The current paper advocates enhancing Enterprise Web 2.0 research and discourse by placing it at the focal point of a multi-disciplinary scientific approach comprising services, design, and, importantly, user science. User psychological insight is then used as a basis for contending with essential human adoption barriers and the dissonances arising between the technical and human use-related promises of Web 2.0, both of which need to be recognized and productively dealt with in the organizational context of Enterprise Web 2.0 adoption. ICEIS Doctoral Consortium Title: WORKFLOW MANAGEMENT SYSTEMS AND AGENTS. DO THEY FIT TOGETHER?
Author(s): Pavlos Delias Abstract: Workflow management systems are an emerging category of information systems, currently under dynamic evolution. On the other hand, software agents are a distinct research area and an equally emerging paradigm for information systems design and development. In this paper, I outline the major points of a doctoral thesis that will focus on the intersection of these two fields. I try to clarify the thesis’s specific objectives and describe the motivation underlying them. The general methodology as well as some initial findings are also described. Title: BUSINESS PROCESS KNOWLEDGE BASE Author(s): Giovanni Pignatelli and Gianmario Motta Abstract: This paper presents a Knowledge Management System to support the design of well-performing business processes. The system is a joint project of the Systems and Information Department (University of Pavia) and BIP (Business Integration Partners). To organize knowledge about business processes, the system uses and integrates several major conceptual frameworks on business processes and performances. These include a stakeholder-oriented framework of business process performances (called HIGO) that supports the analyst in defining sustainable and competitive performances. Second, the business process taxonomy of the MIT Process Handbook allows easier navigation of the collection of process knowledge stored in the system. Third, the Service Level Agreement framework helps organizations set and control performance objectives for processes. The knowledge base stores both the above knowledge structures and case studies in different forms, such as diagrams, text and multimedia. The analyst can navigate stored case studies and create new processes by using the design methodology, which includes the definition of the process structure, the association of performances with the process structures and the design of individual process performances using inheritance methods.
The knowledge base software is a Web 2.0 application that enables collaborative approaches and assists the user in navigating models by industry and by case example. Title: RAPID ONTOLOGY DEVELOPMENT MODEL BASED ON BUSINESS RULES MANAGEMENT APPROACH FOR THE USE IN BUSINESS APPLICATIONS Author(s): Dejan Lavbič and Marjan Krisper Abstract: Ontologies as a means for knowledge manipulation in IT have gained popularity in recent years. Scenarios of successful implementation can mainly be found in the World Wide Web domain and within academia, while there are only a few in the business environment. This paper introduces a Rapid Ontology Development model and the accompanying intelliOnto support tool to facilitate ontology construction for inclusion in business applications. Emphasis is given to simplifying the development of functional components by bridging the gap between the formal syntax of captured knowledge and the acquisition of knowledge in a semi-formal way. The primary steps of the process therefore adopt informal modelling methods, such as the mind map approach, with several transformations and interfaces introduced. This enables business users without detailed technical knowledge to manipulate the ontology while building an ontology with higher semantic expressiveness. While the majority of existing approaches end with the successful construction of an ontology, this approach foresees and supports steps for deploying the developed ontology in the form of a functional component and redeploying it with versioning. Verification of the model will be presented with running examples from the domains of financial portfolio management and the organisation of a rent-a-car business, among others.
Title: REINFORCED LEARNING OF CONTEXT MODELS FOR UBIQUITOUS COMPUTING - A UBIQUITOUS PERSONAL ASSISTANT Author(s): Sofia Zaidenberg, Patrick Reignier and James Crowley Abstract: Ubiquitous environments may become a reality in the foreseeable future, and research aims at making them more and more adapted and comfortable for users. Our work consists in applying reinforcement learning techniques in order to adapt the services provided by a ubiquitous assistant to the user. The learning produces a context model, associating actions to perceived situations of the user. Associations are based on feedback given by the user as a reaction to the behavior of the assistant. Our method brings a solution to some of the problems encountered when applying reinforcement learning to systems where the user is in the loop. For instance, the behavior of the system is completely incoherent at the beginning and needs time to converge, and the user will not accept waiting that long to train the system. Moreover, the user’s habits may change over time, and the assistant needs to integrate these changes quickly. We therefore study methods to accelerate the reinforcement learning process. Title: SEMANTIC INTEROPERABILITY IN MULTI-AGENT SYSTEMS Author(s): Anastasia Karanastasi and Nikolaos Matsatsinis Abstract: The Semantic Web promises to change the way agents navigate and utilize information over the internet. By providing a structured, distributed representation for expressing concepts and relationships defined by multiple domain ontologies, it is now possible for agents to read and reason about knowledge without the need for centralized ontologies. For the successful completion of a transaction over the internet, there might be a need to use more than one Multi-Agent System (MAS), each serving different tasks.
The problem then becomes the lack of interoperability, both semantic and operational, between systems based on different development platforms, communication and content languages, and the heterogeneous domain ontologies that the systems use. In this thesis, we target the problem of interoperability between heterogeneous MASs by providing a common infrastructure with mapping, translation and generation mechanisms. Within the frame of this research, we study and record the transformations of information and the rules for a smooth passage from one MAS to another. Title: AN ANTI-SPAM ARCHITECTURE COMBINING VISUAL AND SEMANTIC FEATURES Author(s): Francesco Gargiulo and Antonio Penta Abstract: It is well known that Unsolicited Commercial Emails (UCE), commonly known as spam, are becoming a serious problem for the email accounts of single users, small companies and large institutions. We address the problem of recognizing spam messages in both their text and image parts. The aim of our research is to define a methodology and design an architecture that overcome some problems still open in state-of-the-art spam filters. The approach takes into account the semantic richness of natural language and the evolution of spam, such as spam images. Finally, we propose an experimental plan and a comparison with existing tools. Title: APPLYING CONTROL SYSTEMS STATE OF THE ART TO ORGANIZATIONAL ENGINEERING Author(s): Sérgio Guerreiro, André Vasconcelos and José Tribolet Abstract: This paper presents a PhD research plan that addresses the alignment problem between Enterprise Architecture (re)definition and Information Systems Architecture (re)implementation. This problem is located in the Organizational Information Systems area, where humans, machines and information are integrated.
The goal of this research is to define a control methodology that (i) allows the specification of an Information Systems Architecture starting from an a priori defined Enterprise Architecture reference and (ii) keeps the Information Systems Architecture specification aligned whenever the Enterprise Architecture reference changes over time. To achieve these goals, we propose to study the state of the art in control systems from disparate scientific areas and to identify the concepts and models likely to be useful for this research. A control systems taxonomy is proposed in order to classify the different application fields against the control schemas used. The research activity network, the research methodology to be used and the expected outcomes are also identified. Special Session on Computational Intelligence using Affinity Set Title: PREDICTING THE ARRIVAL OF EMERGENT PATIENT BY AFFINITY SET Author(s): Yuh-Wen Chen, Moussa Larbani and Chao-Wen Chen Abstract: Predicting the time series of emergent patient arrivals is valuable in monitoring/tracking the daily patient flow, because these efforts keep doctors alerted in advance. We consider the problem of predicting the time series generated by actual emergent patient arrivals. Traditionally, such a problem is analyzed by the moving average method, regression, exponential smoothing or existing evolutionary methods. Instead, we propose a new affinity model to accomplish this goal. Our time series data are recorded from hour to hour (hourly data) for three days: the data of the first two days are used to generate/train the prediction model; the data of the final day are then used to test our prediction results. Two types of model, the affinity model and a neural network model, are compared on their performance. Interestingly, the affinity model yields better prediction results.
This hints that there could be a special pattern within the time series generated by actual emergent patient arrivals. Title: MULTIVARIATE TECHNIQUE FOR CLASSIFICATION RULE SEARCHING - EXEMPLIFIED BY CT DATA OF NECK-PAIN PATIENT Author(s): Jyhjeng Deng Abstract: In the process of searching for classification rules in multivariate categorical data, it is crucial to have a quick start for locating which combination of input variable levels and output variable levels contributes to the most correctly predicted responses (outputs). We propose Fisher’s linear discriminant function to select important input variable candidates, and then use correspondence analysis to pin down which levels of the input variable candidates are closely related to which level of the output variable. The closest linkage between the level of an input variable and the level of the output variable is chosen as the rule for each input variable candidate. The algorithm is applied to hospital data on neck-pain patients whose CT scan diagnoses need to be decided. The results show that our algorithm is not only quicker than exhaustive search, but also yields a result identical to the optimum solution found by exhaustive search. Title: CREDIT SCORING MODEL BASED ON THE AFFINITY SET Author(s): Jerzy Michnik, Anna Michnik and Berenika Pietuch Abstract: The significant development of the credit industry has led to growing interest in sophisticated methods which can support making more accurate and more rapid credit decisions. The parametric statistical methods, such as linear discriminant analysis and logistic regression, were soon followed by nonparametric methods and other techniques: neural networks, decision trees and genetic algorithms. This paper investigates the affinity set, a new concept in the data mining field. The affinity set model was applied to a credit applications database from Poland. The results are compared to those obtained by “Rosetta” (the rough sets and genetic algorithm procedure) and logistic regression.
Special Session on Computer Supported Activity Coordination Title: EASING THE ONTOLOGY DEVELOPMENT AND MAINTENANCE BURDEN FOR SMALL GROUPS AND INDIVIDUALS Author(s): Roger Tagg, Harshad Lalwani and Raaj Srinivasan Kumaar Abstract: Most attempts to aid overworked knowledge workers by changing to a task focus depend on the provision of computer support for categorizing incoming documents and messages. However, such categorization depends, in turn, on creating, and maintaining, a categorization scheme (taxonomy, lexicon or ontology) for the user’s (or the group’s) work structure. This raises the problem that if users are suffering from overload, they are unlikely to have the time or expertise to build and maintain an ontology – a task that is recognized to be far from trivial. This paper describes ongoing research into what options may exist to ease the ontology management burden, and the advantages and problems of these options. Title: IMPROVING THE CUSTOMER INTELLIGENCE WITH CUSTOMER ENTERPRISE CUSTOMER MODEL Author(s): Domenico Consoli, Claudia Diamantini and Domenico Potena Abstract: Considering that the customer is a strategic asset of the enterprise, it is very important to set up a dynamic bi-directional communication channel between the customer and the enterprise that allows interaction with the customer. In this paper we propose a model called Customer enterprise Customer (CeC) which crosses all internal business functions, from design to production. This model senses customer opinions, collected through Web 2.0 tools, and activates a process to improve products/services for customer satisfaction. The CeC is a new model placed on top of the enterprise and integrated with the EERP (Extended Enterprise Resource Planning) enterprise information system.
Title: A GENERAL-PURPOSE TOOL FOR DOCUMENTING DISTRIBUTED LABORATORY ASSAYS Author(s): Patrick Arnold, Oliver Kusche and Andreas Schmidt Abstract: In the context of a publicly funded cooperative project aimed at assessing the potential risks related to nanoparticles, a general-purpose system for documenting laboratory assays is presented. The system is designed to enable users distributed over various organizations to jointly document their activities, yielding a database that allows for the reproducibility of the entire lifecycle of any given specimen. As requirements and activities are constantly evolving, the concept focuses on a flexible approach where new activity types can be added and configured at runtime. Title: MULTI-AGENT NEGOTIATION IN A SUPPLY CHAIN - CASE OF THE WHOLESALE PRICE CONTRACT Author(s): Omar Kallel, Ines Ben Jaafar, Lionel Dupont and Khaled Ghedira Abstract: In this paper, we propose a multi-agent negotiation model for the wholesale price contract (price: W, quantity: Q) in a supply chain with a retailer buying from several subcontractors. We assume that the retailer stocks up from several subcontractors in order to face a market with fixed demand. Each subcontractor has a normal production capacity (CN), which can be increased up to a maximal capacity (CM), but at an additional cost. The demand is greater than the sum of the normal capacities and lower than the sum of the maximal capacities. Thereby, the price negotiated and agreed between the retailer and each subcontractor depends on the ordered quantity and the extra cost generated by any capacity used in excess of the CN level.
In an asymmetric information context, we propose a multi-agent model which is a duplication of the considered supply chain: subcontractor agents negotiate a combination (price, quantity) in order to maximize their benefits, and a retailer agent negotiates several combinations (price, quantity) with the different subcontractor agents in order to satisfy demand, allocate quantities and maximize its margin. Experimental results show that the objective of reaching agreements and establishing a long-lasting “win-win” partnership is fully attained, but the distribution of benefits is not so fair. Title: ADAPTIVE RISK MANAGEMENT IN DISTRIBUTED SENSOR NETWORKS Author(s): Floriano Caprio, Rossella Aiello and Giancarlo Nota Abstract: Risk management in a distributed sensor network charged with keeping environmental variables under control has received great attention in recent years. We propose a framework that combines a high-level model with a distributed system based on adaptive agents able to handle the complete risk lifecycle at various levels of responsibility. The paper first describes the risk modeling problem in a distributed sensor network, then introduces three fundamental agent types, risk monitoring, local monitoring and global monitoring, used to build a network that supports risk management in a distributed environment. Then, the adaptive management of risk exposure is described in terms of a decision process based on tight cooperation among Local Monitoring Agents. The framework is general enough to be applied in several application domains.
Workshop on Pattern Recognition in Information Systems Title: NEURAL APPROACHES TO IMAGE COMPRESSION/DECOMPRESSION USING PCA BASED LEARNING ALGORITHMS Author(s): Luminita State, Catalina Cocianu, Panayiotis Vlamos and Doru Constantin Abstract: Principal component analysis allows the identification of a linear transformation such that the axes of the resulting coordinate system correspond to the largest variability of the investigated signal. The advantage of using principal components stems from the fact that the bands are uncorrelated and no information contained in one band can be predicted from knowledge of the other bands; therefore the information carried by each band is maximal for the whole set of bits. Aiming to obtain a guideline for choosing a proper method for a specific application, we developed a series of simulations on some of the most widely used PCA algorithms, such as GHA, the Sanger variant of GHA, and APEX. The paper reports the conclusions experimentally derived on the convergence rates and their corresponding efficiency for specific image processing tasks. Title: FEATURE-BASED WORD SPOTTING IN ANCIENT PRINTED DOCUMENTS Author(s): Khurram Khurshid, Claudie Faure and Nicole Vincent Abstract: Word spotting/matching in ancient printed documents is an extremely challenging task. Classical methods, such as correlation, tend to fail when tested on ancient documents. We have therefore formulated a multi-step document analysis mechanism which mainly revolves around finding the words and their characters in the text and characterizing each character by a set of multi-dimensional features. Words are matched by comparing these multi-dimensional features of the characters using Dynamic Time Warping (DTW). We have tested this approach on ancient document images provided by the BIUM (Bibliothèque Interuniversitaire de Médecine, Paris). Our initial experiments exhibit extremely encouraging results with more than 90% precision and recall rates. 
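The DTW matching step mentioned in the word-spotting abstract above can be sketched as follows. This is a minimal generic DTW implementation, not the authors' feature pipeline; the feature sequences and the local distance are placeholders:

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic Time Warping cost between two feature sequences.

    a, b : sequences of per-character features (placeholders here)
    dist : local distance between two features
    """
    n, m = len(a), len(b)
    INF = float('inf')
    # D[i][j] = cost of the best alignment of a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(a[i - 1], b[j - 1])
            D[i][j] = d + min(D[i - 1][j],      # skip one step in a
                              D[i][j - 1],      # skip one step in b
                              D[i - 1][j - 1])  # align both steps
    return D[n][m]
```

Two words would then be declared a match when the DTW cost of their character feature sequences falls below a threshold; DTW tolerates the local stretching that rigid correlation cannot.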
Title: HOW TO DEFINE LOCAL SHAPE DESCRIPTORS FOR WRITER IDENTIFICATION AND VERIFICATION Author(s): Imran Siddiqi and Nicole Vincent Abstract: This paper presents an effective method for writer identification and verification in handwritten documents. The idea is that within a handwritten text, there exist certain redundant patterns that a particular writer uses frequently as he writes, and these forms can be exploited to identify/verify the authorship of a document. To extract these patterns, the text is divided into a large number of small sub-images and a set of shape descriptors is extracted from each. Similar patterns are then clustered together, for which a number of clustering techniques have been evaluated. The writer of the unknown document is identified by a Bayesian classifier. The system, trained and tested on 55 documents from the same number of authors, exhibited promising results. Title: IMPROVEMENTS IN DETECTION AND CLASSIFICATION OF PASSING OBJECTS FOR A SECURITY SYSTEM Author(s): Ricardo Sánchez-Sáez, Alfons Juan, Taizo Umezaki, Yuki Inoue, Masahiro Hoguro and Setta Takefumi Abstract: Pattern recognition techniques can be successfully used in the construction of video surveillance systems. In this work a video-based security system that detects and classifies laterally crossing objects, introduced in a previous paper, is reviewed. More reliable results for the system are presented, obtained by performing a leave-one-out evaluation on the data corpus rather than employing a manual approach. Other alternatives in the pattern preprocessing are explored: we employ greyscale patterns and implement a different method for calculating difference images of consecutive video frames. A final benchmark of the classification part compares the results obtained using the original method, dynamic time warping, with those obtained using discrete hidden Markov models plus vector quantization. 
Title: EMPLOYING WAVELET TRANSFORMS TO SUPPORT CONTENT-BASED RETRIEVAL OF MEDICAL IMAGES Author(s): Carolina W. da Silva, Marcela X. Ribeiro, Agma J. M. Traina and Caetano Traina Jr. Abstract: This paper addresses two important issues related to texture pattern retrieval: feature extraction and similarity search. We use discrete wavelet transforms to obtain the image representation from a multiresolution point of view. Features of the approximation subspaces compose the feature vectors, which succinctly represent the images in the execution of similarity queries. Wavelets and the multiresolution method are also used to narrow the semantic gap that exists between low-level features and the high-level user interpretation of images. The work also deals with the "curse of dimensionality", which involves problems with defining similarity in high-dimensional feature spaces. This work was evaluated on two different image datasets and the results show an improvement of up to 90% in the query results using the Daubechies wavelet transform. Title: AN INTELLIGENT CLINICAL DECISION SUPPORT SYSTEM FOR ANALYZING NEUROMUSCULOSKELETAL DISORDERS Author(s): Nigar Şen Köktaş, Neşe Yalabik and Güneş Yavuzer Abstract: This study presents a clinical decision support system for detecting and further analyzing neuromusculoskeletal disorders using both clinical and gait data. The system is composed of a database storing disease characteristics, symptoms and gait data of the subjects, a combined pattern classifier processing these data, and user-friendly interfaces. Data is mainly obtained through Computerized Gait Analysis, which can be defined as the numerical representation of the mechanical measurements of human walking patterns. The decision support system mainly uses a combined classifier to incorporate the different types of data for better accuracy. A decision tree is developed with Multilayer Perceptrons at the leaves. 
The system is planned to be used, with small differences, for various neuromusculoskeletal disorders such as Cerebral Palsy (CP), stroke, and Osteoarthritis (OA). First experiments are performed with OA. Subjects are allocated into four OA-severity categories, formed in accordance with the Kellgren-Lawrence scale: “Normal”, “Mild”, “Moderate”, and “Severe”. A classification accuracy of 80% is achieved on the test set. To complete the system, a patient follow-up mechanism is also designed. Title: SCORING SYSTEMS AND LARGE MARGIN PERCEPTRON RANKING USING POSITIVE WEIGHTS Author(s): Bernd-Jürgen Falkowski and Arne-Michael Törsel Abstract: Large margin perceptron learning with positive coefficients is proposed in the context of so-called scoring systems used for assessing creditworthiness, as stipulated in the Basel II capital accord of the G10 states' central banks. A potential consistency problem can thus be avoided. The approximate solution of a related ranking problem using a modified large margin algorithm producing positive weights is described. Some experimental results obtained from a Java prototype are exhibited. An important parallelization using Java concurrent programming is sketched. It thus becomes apparent that combining the large margin algorithm presented here with the pocket algorithm can provide an attractive alternative to the use of support vector machines. Related algorithms are briefly discussed. Title: ROBUST PATTERN RECOGNITION WITH NONLINEAR FILTERS Author(s): Saúl Martínez-Díaz and Vitaly Kober Abstract: Nonlinear composite filters for robust and illumination-invariant pattern recognition are proposed. The filters are based on logical and rank-order operations. The performance of the proposed filters is compared with that of various linear composite filters in terms of discrimination capability. 
Computer simulation results are provided to illustrate the robustness of the proposed filters when a target is embedded into a cluttered background with unknown illumination and corrupted by additive and impulsive noise. Title: A COMPARATIVE STUDY OF CLUSTERING VERSUS CLASSIFICATION OVER REUTERS' COLLECTION Author(s): Leandro Krug Wives, Stanley Loh and José Palazzo Moreira de Oliveira Abstract: The ease of generating digital content stimulates the accumulation of more and more information. People have tons of information at their disposal over the internet and on their personal hard drives. The problem is that, even with the advent of search engines, it is still complex to analyze, understand and select relevant information. In this sense, clustering techniques are very promising, grouping related information in an organized way. This paper addresses some problems of existing document clustering techniques and presents the “best star” algorithm, which can be used to group and understand chunks of information and find the most relevant ones. One important aspect is how to evaluate the clustering result. We have therefore applied our technique to the Reuters’ collection and compared its results with those of classification techniques already applied to the same collection. Title: AN INTEGRATED SYSTEM FOR ACCESSING THE DIGITAL LIBRARY OF THE PARLIAMENT OF ANDALUSIA: SEGMENTATION, ANNOTATION AND RETRIEVAL OF TRANSCRIPTIONS AND VIDEOS Author(s): Luis M. de Campos, Juan M. Fernández-Luna, Juan F. Huete and Carlos J. Martín-Dancausa Abstract: In this paper, an integrated system for searching the official documents published by the Parliament of Andalusia is presented. It exploits the internal structure of these documents in order to offer not only complete documents but also parts of them, given a query. Additionally, as the sessions of the Parliament are recorded on video alongside the text, the system can return the video segments associated with the retrieved elements. 
To be able to offer this service, several tools had to be developed: PDF converters, video segmentation and annotation tools, and a search engine, all with their corresponding graphical interfaces for interacting with the user. This paper describes the elements that comprise the system. Title: OPTIMIZATION OF LOG-LINEAR MACHINE TRANSLATION MODEL PARAMETERS USING SVMS Author(s): Jesús González-Rubio, Daniel Ortiz-Martínez and Francisco Casacuberta Abstract: The state of the art in statistical machine translation is based on a log-linear combination of different models. In this approach, the coefficients of the combination are computed by using the MERT algorithm with a validation data set. This algorithm has high computational costs. As an alternative, we propose a novel technique based on Support Vector Machines to calculate these coefficients by minimizing a loss function. We report experiments on an Italian-English translation task showing encouraging results. Title: DETECTING CRITICAL SITUATIONS IN PUBLIC TRANSPORT Author(s): R. M. Luque, F. L. Valverde, E. Dominguez, E. J. Palomo and J. Muñoz Abstract: This paper presents an information system applied to video surveillance to detect and identify aggressive behavior of people in public transport. A competitive neural network is proposed to form a background model for detecting objects in motion in the sequence. After identifying the objects and extracting their features, a set of rules is applied to decide whether an anomalous behavior is considered aggressive. Our approach is integrated in a CCTV system and is a powerful support tool that helps security operators detect critical situations in public transport in real time. 
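The surveillance abstract above builds its background model with a competitive neural network. As a simplified stand-in for that step (explicitly not the authors' network), the detect-objects-in-motion idea can be illustrated with an exponential running-average background model; the update rate and threshold are hypothetical parameters:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running-average background model: a simplified
    stand-in for the competitive-network model in the abstract."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, threshold=25.0):
    """Pixels differing from the background by more than the
    threshold are flagged as objects in motion."""
    return np.abs(frame - bg) > threshold
```

In a full system, the resulting masks would be grouped into objects whose extracted features feed the rule set that decides whether a behavior is aggressive.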
Title: WEIGHTED AGGLOMERATIVE CLUSTERING TO SOLVE NORMALIZED CUTS PROBLEMS Author(s): Giulio Genovese Abstract: A new agglomerative algorithm is introduced that can be used as a replacement for any partitioning algorithm that tries to optimize an objective function related to graph cuts. In particular, spectral clustering algorithms fall in this category. A new measure of similarity is introduced to show that the approach, although radically different from the one adopted in partitioning approaches, tries to optimize the same objective. Experiments are performed on the problem of image segmentation, but the idea can be applied to a broader range of applications. Title: INFORMATION THEORETIC TEXT CLASSIFICATION METHODS EVALUATION Author(s): David Pereira Coutinho and Mário A. T. Figueiredo Abstract: Most approaches to text classification rely on some measure of (dis)similarity between sequences of symbols. Information-theoretic measures have the advantage of making very few assumptions about the models considered to have generated the sequences, and have been the focus of recent interest. This paper compares the use of the Ziv-Merhav method (ZMM) and the Cai-Kulkarni-Verdú method (CKVM) for the estimation of relative entropy (or Kullback-Leibler divergence) from sequences of symbols when used as a tool for text classification. We briefly describe our implementation of the ZMM, based on a modified version of the Lempel-Ziv algorithm (LZ77), and the CKVM implementation, which is based on the Burrows-Wheeler block sorting transform (BWT). Assessing the accuracy of both the ZMM and CKVM on synthetic Markov sequences shows that CKVM yields better estimates of the Kullback-Leibler divergence. Finally, we apply both methods to a text classification problem (more specifically, authorship attribution), where surprisingly CKVM performs poorly while ZMM outperforms a previously proposed (also information-theoretic) method. 
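The ZMM and CKVM estimators above are built on LZ77 and the BWT respectively. As a much simpler illustration of compression-based text classification in the same information-theoretic spirit (explicitly not the authors' estimators), one can use an off-the-shelf compressor and the normalized compression distance:

```python
import zlib

def c(s):
    """Compressed length of a string, in bytes."""
    return len(zlib.compress(s.encode('utf-8'), 9))

def ncd(x, y):
    """Normalized Compression Distance: a crude, compressor-based
    dissimilarity related in spirit to relative-entropy estimators
    (not the ZMM/CKVM methods themselves)."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(text, corpora):
    """Attribute text to the author whose corpus minimizes NCD.

    corpora : dict mapping author name -> reference text
    """
    return min(corpora, key=lambda author: ncd(text, corpora[author]))
```

The intuition is the same as in the paper: a text compresses better in the presence of statistically similar material, so compressed lengths stand in for cross-entropies in authorship attribution.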
Title: BERNOULLI HMMS FOR OFF-LINE HANDWRITING RECOGNITION Author(s): Adrià Giménez-Pastor and Alfons Juan-Císcar Abstract: Hidden Markov models (HMMs) with one or several Gaussian distributions in each state have been extensively used in handwriting recognition tasks. Bernoulli models have been applied successfully to binary images. In this paper we introduce a new model, Bernoulli HMMs: HMMs with one Bernoulli distribution per state. The model has been tested on an Arabic subword task and on an English word task. Different issues, such as feature vector dimension and number of states, have been examined experimentally. Promising results have been obtained. Title: WORD ALIGNMENT QUALITY IN THE IBM 2 MIXTURE MODEL Author(s): Jorge Civera and Alfons Juan Abstract: Finite mixture modelling is a standard pattern recognition technique. In statistical machine translation (SMT), however, the use of mixture modelling is only now being explored. Two main advantages of the mixture approach are, first, its flexibility in finding an appropriate tradeoff between model complexity and the amount of training data available and, second, its capability to learn specific probability distributions that better fit subsets of the training dataset. This latter advantage is even more important in SMT, since it is widely accepted that most state-of-the-art translation models have limited applicability beyond restricted semantic domains. In this work, we revisit the mixture extension of the well-known M2 translation model. The M2 mixture model is evaluated on a large-scale word alignment task, obtaining encouraging results that demonstrate the applicability of finite mixture modelling in SMT. Title: IMPROVEMENTS IN THE COMPUTER ASSISTED TRANSCRIPTION SYSTEM OF HANDWRITTEN TEXT IMAGES Author(s): Verónica Romero, Alejandro H. Toselli, Jorge Civera and Enrique Vidal Abstract: To date, automatic handwriting recognition systems are far from being perfect. 
Therefore, once a full recognition process of a handwritten text image has finished, human off-line intervention is required in order to correct the results of such systems. In previous works, an interactive, on-line framework has been presented. The results obtained in those works showed that significant amounts of human effort can be saved. In this work a new way to interact with this system is proposed. Now, the user only has to indicate the point where an error has occurred, and the system proposes a new suitable continuation. This new kind of interaction aims to facilitate and speed up the transcription of documents. Empirical results suggest that this new interaction method can lead to further improvements in user productivity. Title: COMPARISON OF ADABOOST AND ADTBOOST FOR FEATURE SUBSET SELECTION Author(s): Martin Drauschke and Wolfgang Foerstner Abstract: This paper addresses the problem of feature selection and presents a comparison of two boosting methods, AdaBoost and ADTboost, with respect to the classification error depending on the number of training samples used and the number of selected features. We discuss both techniques and sketch their functionality, restricting both boosting approaches to linear weak classifiers. This enables us to propose a feature subset selection method, which we evaluated on synthetic and on benchmark data sets. Title: A COLOUR SPACE SELECTION SCHEME DEDICATED TO INFORMATION RETRIEVAL TASKS Author(s): Romain Raveaux, Jean-Christophe Burie and Jean-Marc Ogier Abstract: The choice of a relevant colour space is a crucial step when dealing with image processing tasks (segmentation, graphic recognition…). From this fact, we address in a generic way the following question: what is the best representation space for a computational task on a given image? In this article, a colour space selection system is proposed. 
From an RGB image, each pixel is projected into a vector composed of 25 colour primaries. This vector is then reduced to a hybrid colour space made up of the three most significant colour primaries. Hence, the paradigm is based on two principles: feature selection methods and the assessment of a representation model. The quality of a colour space is evaluated according to its capability to make colours homogeneous and consequently to increase data separability. Our framework answers the question of choosing a meaningful representation space for image processing applications which rely on colour information. Standard colour spaces are not well designed to process specific images (e.g. medical images, images of documents), so a real need has arisen for a dedicated colour model. Title: INCREASING TRANSLATION SPEED IN PHRASE-BASED MODELS VIA SUBOPTIMAL SEGMENTATION Author(s): Germán Sanchis-Trilles and Francisco Casacuberta Abstract: Phrase-based models nowadays constitute the core of the state of the art in the statistical pattern recognition approach to machine translation. Being able to introduce context information into the translation model, they usually produce translations whose quality is often difficult to improve. However, these models usually have an important drawback: the translation speed they are able to deliver is mostly not sufficient for real-time tasks, and translating a single sentence can sometimes take several minutes. In this paper, we describe a novel technique for significantly reducing the size of the translation table by performing a Viterbi-style selection of the phrases that constitute the final phrase table. Even in cases where the pruned phrase table contains only 6% of the segments of the original one, translation quality is not worsened. Translation quality remains the same in the worst case, and increases by 0.3 BLEU in the best case. 
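The pruning idea in the abstract above (retain only the phrases selected by best segmentations of the training corpus) can be sketched at a very high level. The data structures below are hypothetical and this is not the authors' implementation; in particular, producing the Viterbi-style best segmentations themselves is assumed to be done elsewhere:

```python
def prune_phrase_table(table, best_segmentations):
    """Keep only phrase-table entries that occur in the best
    segmentations of a training corpus (sketch only).

    table              : dict mapping source phrase -> translations
    best_segmentations : iterable of lists of source phrases, one
                         list per sentence (its best segmentation)
    """
    used = set()
    for segmentation in best_segmentations:
        used.update(segmentation)
    return {phrase: t for phrase, t in table.items() if phrase in used}
```

The speed-up then comes for free: a table with a small fraction of the original segments is both smaller in memory and faster to search during decoding.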
Title: USING A BILINGUAL CONTEXT IN WORD-BASED STATISTICAL MACHINE TRANSLATION Author(s): Christoph Schmidt, David Vilar and Hermann Ney Abstract: In statistical machine translation, phrase-based translation (PBT) models lead to a significantly better translation quality than single-word-based (SWB) models. PBT models translate whole phrases, thus considering the context in which a word occurs. In this work, we propose a model which further extends this context beyond phrase boundaries. The model is compared to a PBT model on the IWSLT 2007 corpus. To profit from the respective advantages of both models, we use a model combination, which results in an improvement in translation quality on the examined corpus. Title: VISUAL AND OCR-BASED FEATURES FOR DETECTING IMAGE SPAM Author(s): Carlo Sansone and Francesco Gargiulo Abstract: The presence of unsolicited bulk emails, commonly known as spam, can seriously compromise normal user activities, forcing users to navigate through mailboxes to find the relatively few interesting emails. Even though a wide variety of spam filters has been developed to date, the problem is far from being resolved, since spammers continuously modify their malicious techniques in order to bypass filters. In particular, in recent years spammers have begun conveying unsolicited commercial messages by means of images attached to emails whose textual part appears perfectly legitimate. In this paper we present an approach for overcoming some of the problems that remain with state-of-the-art spam filters when checking images attached to emails. Results on both personal and publicly available email databases are presented in order to assess the performance of the proposed approach. Title: HANDWRITTEN TEXT NORMALIZATION BY USING LOCAL EXTREMA CLASSIFICATION Author(s): J. Gorbe-Moya, S. España-Boquera, F. Zamora-Martínez and M. J. 
Castro-Bleda Abstract: This paper proposes a method to normalize handwritten lines of text based on classifying a set of local extrema with supervised learning methods. The points classified as lower baseline are used to accurately estimate the slope and horizontal alignment. A second step computes the reference lines of the slope- and slant-corrected text in order to perform a size normalization. An experimental comparison with another well-known technique has been performed, showing an improvement in the recognition accuracy using HMMs. Title: NON-LINEAR TRANSFORMATIONS OF VECTOR SPACE EMBEDDED GRAPHS Author(s): Kaspar Riesen and Horst Bunke Abstract: In pattern recognition and related areas an emerging trend of representing objects by graphs can be observed. As a matter of fact, graph-based representation offers a powerful and flexible alternative to the widely used feature vectors. However, the space of graphs carries almost no mathematical structure, and consequently there is a lack of suitable algorithms for graph classification, clustering, and analysis. Recently, a general approach to transforming graphs into n-dimensional real vectors has been proposed. In the present paper we use this method, which can also be regarded as a novel graph kernel, and investigate the application of kernel principal component analysis (kPCA) to the resulting vector space embedded graphs. To this end we consider the common task of object classification, and show that kPCA in conjunction with the novel graph kernel outperforms different reference systems on several graph data sets of diverse nature. 
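Given a precomputed kernel (Gram) matrix, e.g. from a graph kernel over vector space embedded graphs, kPCA reduces to centering the matrix in feature space followed by an eigendecomposition. A minimal sketch follows; the graph kernel itself is not specified in the abstract, so here the kernel matrix is simply taken as input:

```python
import numpy as np

def kernel_pca(K, n_components=2):
    """Project objects given only their kernel (Gram) matrix K.

    Returns the n_components leading kPCA coordinates per object.
    """
    n = K.shape[0]
    # Center the kernel matrix in feature space:
    # Kc = K - 1K - K1 + 1K1, with 1 the n x n matrix of 1/n.
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # eigh returns eigenvalues in ascending order; reverse them.
    vals, vecs = np.linalg.eigh(Kc)
    vals, vecs = vals[::-1], vecs[:, ::-1]
    # Coordinates: eigenvectors scaled by sqrt of eigenvalues
    # (tiny negative values from round-off are clipped to zero).
    top = np.clip(vals[:n_components], 0.0, None)
    return vecs[:, :n_components] * np.sqrt(top)
```

A classifier can then be trained on the resulting low-dimensional coordinates, which is the role kPCA plays in the object-classification experiments described above.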
Workshop on Modelling, Simulation, Verification and Validation of Enterprise Information Systems Title: FORMAL SPECIFICATION OF MATCHMAKERS, FRONT-AGENTS, AND BROKERS IN AGENT ENVIRONMENTS USING FSP Author(s): Amelia Bădică and Costin Bădică Abstract: The aim of the paper is to precisely characterize types of middle-agents -- matchmakers, brokers and front-agents -- by formally modeling their interactions with requesters and providers using a process-algebraic approach. Title: MODELING MULTI-AGENT LOGISTIC PROCESS SYSTEM USING HYBRID AUTOMATA Author(s): Ammar Mohammed and Ulrich Furbach Abstract: Multi-agent systems are a widely accepted solution for handling complex problems. One application of multi-agent systems is autonomous logistics. In autonomous logistic processes, potentially every element in a logistic supply chain is modeled as a cooperating software agent, and there exist modeling languages for such multi-agent systems. However, these modeling languages do not allow the properties of the systems to be verified. Hybrid automata can be used to model hybrid systems by capturing both discrete and continuous changes of a system. Fortunately, hybrid automata are equipped with formal semantics that make it possible to apply formal methods to them in order to prove certain properties of the specified systems. In this paper, we model multi-agent system behaviors in autonomous logistic processes using the concept of hybrid automata. With the help of model checking techniques, we can prove some properties of a modeled system before engaging in its implementation. Title: ACTIVE DATABASE SYSTEM REALIZED BY A PETRI NET APPROACH Author(s): Lorena Chavarría-Báez and Xiaoou Li Abstract: An active database system executes actions automatically in response to events taking place either inside or outside the database. 
Developing an active database system, especially the active rule base, is not an easy task, because some (unnoticed) errors may be introduced during its construction. Also, due to rule dynamics, there is no way to know in advance which rules will be fired. In this paper we present a Petri net-based approach to integrating active rules into a traditional database system. Since our approach is based on a Petri net model, we have analysis tools for debugging (using the incidence matrix) and simulating (through token flow) active rules. We implemented our approach as a software system called ECAPNSim, which not only has verification and simulation functionality but also allows us to develop multi-platform applications; it can communicate with several database management systems, so that a single active rule base can work independently of the DBMS. Title: AN APPROACH TO SIMULATE ENTERPRISE RESOURCE PLANNING SYSTEMS Author(s): André Bögelsack, Holger Jehle, Holger Wittges, Jörg Schmidl and Helmut Krcmar Abstract: Enterprise Resource Planning (ERP) systems are an essential part of the infrastructure that runs and supports a company’s business processes. These systems have to be updated frequently to satisfy legal regulations, provide needed functionality and sustain stability within a changing technical environment. With these needs for change in mind, the impact of related updates is often not transparent to the operator and may cause unwanted side effects. One way to prevent update problems is to run a shadow system and deploy changes there first; however, this is not always feasible for monetary or other reasons. A suitable alternative might be the simulation of ERP systems. This workshop paper therefore shows how a simulation model for complex ERP systems might be developed. The paper focuses on the development of an adequate structure to represent complex ERP system architectures. 
For the development of this structure, it utilizes the idea of the Enterprise Service Oriented Architecture (SOA) paradigm and the Open System Interconnection (OSI) reference model. The basic approach focuses on a so-called multi-layer service map, which contains all services inside an ERP system and the interdependencies between these services. This multi-layer service map can later be used as a data basis to create a simulation model of the analysed ERP system. Title: APPROACHES TO AN ALL-ENCOMPASSING FORMAL SEMANTICS FOR THE UML Author(s): María Victoria Cengarle Abstract: More than ten years have passed since first attempts to develop a precise semantics for the UML were discussed. Since then, various approaches have been proposed, and much experience has been gained. There still does not exist, however, a commonly agreed upon formal semantics definition. In this talk I will report on ongoing work to develop an all-encompassing formal semantics for the UML. Title: FORMAL GOAL-BASED MODELING OF ORGANIZATIONS Author(s): Viara Popova and Alexei Sharpanskykh Abstract: Each organization exists for the achievement of certain goals. To ensure continued success, the organization should monitor its performance w.r.t. these goals. Performance is often evaluated by estimating performance indicators (PIs). In most existing organization modeling approaches the relation between PIs and goals is implicit. This paper proposes a formal approach for modeling goals based on PIs and defines mechanisms for establishing goal satisfaction, which enable the evaluation of organizational performance. Analysis and methodological issues related to goals are briefly discussed. Title: THE LINEAR CONDITIONAL PROBABILITY MATRIX GENERATOR FOR IT GOVERNANCE PERFORMANCE PREDICTION Author(s): Mårten Simonsson, Robert Lagerström and Pontus Johnson Abstract: The goal of IT governance is not only to achieve internal efficiency in an IT organization, but also to support IT’s role as a business enabler. 
The latter is an ability here denoted IT governance performance, and it cannot be controlled by IT management directly. Their realm of control includes IT governance maturity indicators such as the existence of different IT activities, documents, metrics and roles. Current IT governance frameworks are suitable for describing IT governance, but lack the ability to predict how changes to the IT governance maturity indicators affect the IT governance performance. This paper presents a Bayesian network for IT governance performance prediction, learned from experience from 35 case studies. The network learns using the proposed Linear Conditional Probability Matrix Generator. The resulting Bayesian network for IT governance performance prediction can be used to support IT governance decision-making. Title: A PETRI NET BASED APPROACH TO MODELLING RESOURCE CONSTRAINED INTERORGANIZATIONAL WORKFLOWS Author(s): Oana Prisecaru Abstract: Interorganizational workflows offer companies a solution for managing business processes that involve more than one organization. In this paper, an interorganizational workflow is modelled using a special class of nested Petri nets, resource constrained interorganizational workflow nets (RIWF-nets). This approach allows the specification of the local workflows in the organizations involved and of the communication structure between them, permitting a clear distinction between these components. In our model, the resources of a local workflow can be represented explicitly and shared with other participating workflows. Title: WEAKLY CONTINUATION CLOSED HOMOMORPHISMS ON AUTOMATA Author(s): Thierry Nicola and Ulrich Ultes-Nitsche Abstract: A major limitation of system and program verification is the state space explosion problem. To avoid this problem, there exist several approaches to reducing the size of the system state space. 
Some methods try to keep the state space small during the verification run; other methods reduce the original state space prior to the verification. Among the latter are abstraction homomorphisms. Weakly continuation closed (wcc) homomorphisms are abstraction homomorphisms preserving exactly those properties of the original behaviour which are satisfied inherently fairly. However, the practical use of wcc homomorphisms has been limited by the lack of a reasonably efficient algorithm for checking whether or not a homomorphism is wcc. This paper presents a result which allows the development of such algorithms for wcc homomorphisms on automata. Title: CHECKING INHERENTLY FAIR LINEAR-TIME PROPERTIES IN A NON-NAÏVE WAY Author(s): Thierry Nicola, Ulrich Ultes-Nitsche and Frank Niessner Abstract: An alternative verification relation for linear-time properties is introduced which uses an inherent fairness condition. The relation is specifically tailored to the verification of distributed systems under a relaxed version of strong fairness. We call it the inherently fair linear-time verification relation, or IFLTV relation for short. We present an analysis of the mathematical structure of the IFLTV relation, which enables us to obtain an improved, non-naive procedure for checking it. Title: A MODEL TRANSFORMATION FRAMEWORK FOR MODEL DRIVEN ENGINEERING Author(s): Xiaoping Jia, Hongming Liu, Lizhang Qin and Adam Steele Abstract: Model Driven Engineering (MDE) is a model-centric software development approach that aims at improving the quality and productivity of software development processes. While some progress in MDE has been made, there are still many obstacles to realizing its full benefits. These obstacles include incompleteness of existing modeling notations, inadequate tool support, and the lack of effective model transformation mechanisms. 
This paper presents a new model driven engineering framework, which is based on a formal modeling notation -- the Z-based Object-Oriented Modeling notation (ZOOM). It includes a set of supporting tools aiming at delivering the benefits of model driven engineering in practical applications. In particular, this proposal focuses on one key aspect of MDE -- model transformation. A template-based model transformation framework using a Hierarchical Relational Meta-model (HRM) is introduced. This framework aims to provide a simple, effective, and practical way to define model transformations. The potential benefits of the proposed model transformation framework include: 1) readability and rigorousness of meta-model definitions; 2) simplicity of transformation definition; and 3) extensibility of transformation templates. The architecture and design of the framework are discussed, and comparisons with related research work are provided to show the benefits of this framework. Title: AN EXECUTABLE SEMANTICS OF OBJECT-ORIENTED MODELS FOR SIMULATION AND THEOREM PROVING Author(s): Kenro Yatake and Takuya Katayama Abstract: This paper presents an executable semantics of OO models. We made it possible to conduct both simulation and theorem proving on the semantics by implementing its underlying heap memory structure within the expressive intersection of the functional language ML and the theorem prover HOL. This paper also presents a verification system, ObjectLogic, which supports simulation and theorem proving of OO models based on the executable semantics. As an application example, we show a verification of a UML model of a practical firewall system. Title: MODELLING MULTI-AGENT SYSTEMS WITH ORGANIZATIONS IN MIND Author(s): Matthias Wester-Ebbinghaus and Daniel Moldt Abstract: Software systems are subject to increasing complexity and in need of efficient structuring. Multi-agent system research has come up with approaches for an organization-oriented comprehension of software systems.
However, when it comes to the collective level of organizational analysis, multi-agent system technology lacks clear development concepts. To overcome this problem while preserving the gains of the agent-oriented approach, this paper advocates a shift in perspective from the individual agent to the organization as the core metaphor of software engineering targeting very large systems. According to different levels of analysis drawn from organization theory, different types of organizational units are incorporated into a reference architecture for organization-oriented software systems. Title: SOFTWARE MODEL CHECKING FOR INTERNET PROTOCOLS WITH JAVA PATHFINDER Author(s): Jesús Martínez and Cristóbal Jiménez Abstract: Java is one of the most popular languages used to build complex and distributed systems. The existence of high-level libraries and middleware now makes it easy to develop applications for enterprise information systems. Unfortunately, implementing distributed software is always an error-prone task. Thus, middleware and application protocols must guarantee different functional and non-functional properties, a field usually covered by tools based on formal methods. However, analyzing software is still a huge challenge for these tools, and only a few can deal with software complexity. One such tool is the Java Pathfinder model checker (JPF). This paper presents a new approach to the verification of Java systems which communicate through Internet Sockets. Our approach assumes that almost all the middleware and network libraries used in Java rely on the protocols available at the TCP/IP transport layer. Therefore, we have extended JPF, now allowing developers to verify not only single multi-threaded programs but also fully distributed Socket-based software.
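As a concrete illustration of the kind of Socket-based exchange such verification targets (my own sketch, not code from the paper; Python sockets stand in for the Java TCP/IP transport layer, and the tiny echo protocol is an invented example):

```python
# A minimal client/server pair over TCP sockets. A software model
# checker for distributed programs would explore the interleavings
# of the two endpoints; here we simply run one exchange.
import socket
import threading

def echo_server(sock):
    conn, _ = sock.accept()
    data = conn.recv(1024)          # receive one request
    conn.sendall(b"ack:" + data)    # acknowledge it
    conn.close()

def run_once(message):
    server = socket.socket()
    server.bind(("127.0.0.1", 0))   # OS-assigned free port
    server.listen(1)
    port = server.getsockname()[1]
    t = threading.Thread(target=echo_server, args=(server,))
    t.start()
    client = socket.socket()
    client.connect(("127.0.0.1", port))
    client.sendall(message)
    reply = client.recv(1024)
    client.close()
    t.join()
    server.close()
    return reply

print(run_once(b"hello"))  # b'ack:hello'
```

Even this two-endpoint exchange has several possible schedules of connect, send and receive, which is why exhaustive exploration by a model checker is valuable.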
Title: COMPARING METHODOLOGIES FOR SERVICE-ORIENTATION USING THE GENERIC SYSTEM DEVELOPMENT PROCESS Author(s): Linda Terlouw Abstract: Enterprises use Service Oriented Architecture (SOA) and Service Oriented Design (SoD) as a means to achieve better business-IT alignment and a more flexible IT environment. Definitions of both notions are often unclear or even contradictory. This paper instantiates the Generic System Development Process (GSDP) for service-orientation. Based on the terminology of this process, we compare several methodologies on their completeness and on the amount of detail in which they describe their process steps. Methodologies that focus on (nearly) the whole service-oriented development process are Papazoglou's and van den Heuvel's service-oriented design methodology, SOMA, and SOAF. More specialized methodologies are the goal-driven approach, BCI3D, and Business Elements Analysis. (work in progress by Ph.D. student) Title: INTEGRATING FORMAL APPROACHES AND SIMULATION TO IMPROVE RELIABILITY AND CORRECTNESS OF WEB SERVICES Author(s): George Eleftherakis and Ognen Paunovski Abstract: The emerging web service paradigm offers an innovative and practical platform for business-to-business collaboration and enterprise information systems integration. A methodology for modelling web service systems based on an incremental and iterative approach integrating formal techniques and simulation is presented. This disciplined approach focuses on improving the reliability and correctness of the system under development. Using X-machines as the core design technique, it offers an intuitive mapping of the BPEL specification. At the same time, it enforces continuous verification and testing of components throughout the process. Blending this formal approach with simulation allows the informal verification of complex service compositions in cases where formal verification is impossible or impractical.
The applicability of the methodology is practically demonstrated through a typical web service case study. Title: MODELING WITH SERVICE DEPENDENCY DIAGRAMS Author(s): Lawrence Cabac, Ragna Dirkner and Daniel Moldt Abstract: This paper describes the usage of component-diagram-like models for the analysis and design of dependencies in multi-agent systems. As in other software paradigms, in multi-agent-based applications there exist dependencies between offered and required services, and respectively between the agents that offer or require those services. In simple settings it seems superfluous to model or analyze those dependencies explicitly because they are obvious. In complex settings, however, these dependencies can grow confusingly large and can cause misunderstandings among the developers of the system. Here it is useful to visualize those dependencies by analyzing the given multi-agent application and displaying them in a diagram. The diagram gives a clear illustration of the overall structure of the system and therefore forms a basis for discussion of the architecture. In addition, the diagram may be used for the documentation of the system. A dependency diagram technique, together with a tool integration, is presented in this paper. Title: AN ASPECT FOR DESIGN BY CONTRACT IN JAVA Author(s): Sérgio Agostinho, Pedro Guerreiro and Hugo Taborda Abstract: Several techniques exist for introducing Design by Contract in languages providing no direct support for it, such as Java. One such technique uses aspects that introduce preconditions and postconditions by means of before and after advice. To use this, programmers must be knowledgeable in the aspect language, even if they would rather concentrate on Design by Contract alone. On the other hand, we can use aspects to weave in preconditions, postconditions and invariants that will have been programmed in the source language, as regular Boolean functions.
In doing this, we must find ways to automatically “inherit” preconditions and postconditions when redefining functions in subclasses, and we must be able to record the initial state of the object when calling a modifier, so that it can be observed in the postconditions. With such a system, during development, the program will be compiled together with the aspects providing the Design by Contract facilities, using the compiler for the aspect language, and the running program will automatically check all the woven-in assertions, raising an exception when they evaluate to false. For the release build, it suffices to compile using the source language compiler, ignoring the aspects, and the assertions will be left out. Title: AN APPROACH FOR THE SPECIFICATION AND THE VERIFICATION OF MULTI-AGENT SYSTEMS INTERACTION PROTOCOLS USING AUML AND EVENT B Author(s): Leila Jemni Ben Ayed and Fatma Siala Abstract: This paper suggests an approach for the specification and the verification of interaction protocols in multi-agent systems. This approach is based on the Agent Unified Modelling Language (AUML) and the Event B method. The interaction protocols are initially modeled using the AUML protocol diagram, which gives a graphical and comprehensive model. The resulting model is then translated into Event B and enriched with the required interaction protocol properties. We obtain a complete requirement specification in Event B which can be verified using a powerful B support tool like B4free. In this paper, we focus on the translation process of AUML protocol diagrams into Event B, and with an example of a multi-agent system interaction protocol we illustrate our approach. Title: A CASE STUDY IN INTEGRATED QUALITY ASSURANCE FOR PERFORMANCE MANAGEMENT SYSTEMS Author(s): Liam Peyton, Bo Zhan and Bernard Stepien Abstract: On-line enterprise applications that are used by thousands of geographically dispersed users present special challenges for quality assurance.
A case study of a hospital performance management system is used to investigate the specific issues of architectural complexity, dynamic change, and access to sensitive data in test environments. A quality assurance framework is proposed that integrates with and leverages the performance management system itself. As well, a data generation tool suitable for the requirements of testing performance management systems has been built that addresses limitations in similar commercially available tools. Workshop on Security in Information Systems Title: KEY ESTABLISHMENT ALGORITHMS FOR SOME DETERMINISTIC KEY PREDISTRIBUTION SCHEMES Author(s): Sushmita Ruj and Bimal Roy Abstract: Key establishment is a major problem in sensor networks because of resource constraints. Several key predistribution schemes have been discussed in the literature. Though the key predistribution algorithms have been described very well in these papers, no key establishment algorithm has been presented in some of them. Without an efficient key establishment algorithm, a key predistribution scheme is incomplete. We present efficient shared-key discovery algorithms for some known deterministic key predistribution schemes. Our algorithms run in $O(1)$ or $O(\sqrt[3]{N})$ time and the communication overhead is at most $O(\log \sqrt{N})$ bits, where $N$ is the size of the network. The efficient key establishment schemes make deterministic key predistribution an attractive option over randomized schemes. Title: FIREWALL RULE SET INCONSISTENCY CHARACTERIZATION BY CLUSTERING Author(s): Sergio Pozo, Rafael Ceballos and Rafael M. Gasca Abstract: Firewalls provide the first line of defence of nearly all networked institutions today. However, firewall ACLs can have inconsistencies, allowing traffic that should be denied, or vice versa.
In this paper, we analyze the inconsistency characterization problem as a problem separate from the diagnosis one, and propose definitions in order to characterize one-to-many inconsistencies. We identify the combinatorial part of the problem that generates exponential complexities in the combined diagnosis and characterization algorithms proposed by other authors. Then we propose a decomposition of the combinatorial problem into several smaller combinatorial ones, which effectively reduces the complexity of the problem. Finally, we propose an approximate heuristic and algorithms to solve the problem in worst-case polynomial time. Although many algorithms have been proposed to address this problem, the presented ones are an improvement over them, since the time complexity of the full diagnosis process is polynomial. There are no constraints on how rule field ranges are expressed. Title: TRUST-AWARE ANONYMOUS AND EFFICIENT ROUTING FOR MOBILE AD-HOC NETWORKS Author(s): Min-Hua Shao and Shin-Jia Huang Abstract: Anonymous routing is a value-added technique used in mobile ad hoc networks for security and privacy purposes. It has inspired a lot of research interest, but very few measures exist for trust-integrated cooperation in reliable routing. This paper proposes an optimistic routing protocol for the betterment of collaborative trust-based anonymous routing in MANETs. The key features of our scheme include the accomplishment of anonymity-related goals, trust-aware anonymous routing, effective pseudonym management, and lightweight overhead in computation, communication and storage. Title: PROCESS MODELING FOR PRIVACY-CONFORMANT BIOBANKING: CASE STUDIES ON MODELING IN UMLSEC Author(s): Ralph Herkenhöner Abstract: The continuing progress in research on human genetics is greatly increasing the demand for large surveys of voluntary donors' data and biospecimens.
With this new dimension of acquiring and providing data and biospecimens, a new quality of biobanking has arisen. Using automated data and biospecimen handling along with modern communication channels, such as the World Wide Web, poses new challenges to the protection of donors' privacy. Within current discussions on privacy and data protection, an emerging result is the need for auditing privacy and data protection within biobanks. For this purpose, finding a proper way of describing biobanks in terms of a data protection audit is a vital issue. This paper presents how modeling in UMLsec can improve the description of biobanks with the objective of performing a data protection audit. It demonstrates the use of UMLsec for describing security characteristics regarding data protection issues on the basis of two case studies. Title: NEW ATTACK STRATEGY FOR THE SHRINKING GENERATOR Author(s): Pino Caballero-Gil, Amparo Fúster-Sabater and M. Eugenia Pazo-Robles Abstract: This work shows that the cryptanalysis of the shrinking generator requires fewer intercepted bits than indicated by its linear complexity. Indeed, whereas the linear complexity of shrunken sequences is between A·2^{(S−2)} and A·2^{(S−1)}, we claim that the initial states of both component registers are easily computed with fewer than A·S shrunken bits. Such a result is proven thanks to the definition of shrunken sequences as interleaved sequences. Consequently, it is conjectured that this statement can be extended to all interleaved sequences. Furthermore, this paper confirms that certain bits of the interleaved sequences have a greater strategic importance than others, which must be considered a proof of weakness of interleaved generators.
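To make the object of this cryptanalysis concrete, here is a minimal shrinking generator in Python (an illustrative sketch, not the authors' attack; the register lengths, taps and seeds below are arbitrary choices): two LFSRs run in step, and a bit of register A survives into the shrunken keystream only when the selector register S outputs a 1.

```python
def lfsr(seed_bits, taps):
    """Fibonacci LFSR: yield the last state bit, then shift in the
    XOR of the tapped positions. Runs forever."""
    state = list(seed_bits)
    while True:
        yield state[-1]
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]

def shrinking_generator(a_seed, a_taps, s_seed, s_taps, n):
    """Emit n shrunken bits: A's bit survives iff S's bit is 1."""
    a, s = lfsr(a_seed, a_taps), lfsr(s_seed, s_taps)
    out = []
    while len(out) < n:
        a_bit, s_bit = next(a), next(s)
        if s_bit == 1:
            out.append(a_bit)
    return out

# Toy registers: A of length 4, selector S of length 3.
bits = shrinking_generator([1, 0, 0, 1], [0, 3], [1, 1, 0], [0, 2], 8)
print(bits)  # [0, 0, 1, 0, 1, 1, 1, 1]
```

The decimation by S is exactly what makes the shrunken sequence an interleaved sequence, the structure the attack above exploits.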
Title: AN ACCESS CONTROL MODEL FOR LOCATION BASED SERVICES Author(s): Cameron Ross Dunne, Thibault Candebat and David Gray Abstract: In this paper we propose an access control model for use by a trusted middleware infrastructure, which is part of an architecture that supports the operation of Location Based Services (LBSs) over the Internet. This access control model provides users with increased security, and particularly privacy, by enabling them to create two different types of permissions based on how their location information is being used. These permissions specify which users and LBSs are entitled to obtain location information about which other users, under what circumstances the location information is released to the users and LBSs, and the accuracy of any location information that is released to the users and LBSs. Title: EFFECTIVENESS OF TRUST REASONING FOR ATTACK DETECTION IN OLSR Author(s): Asmaa Adnane, Christophe Bidan and Rafael T. de Sousa Jr Abstract: Previous works [2, 3, 15] have proposed to check information consistency and to detect misbehaving nodes for the OLSR protocol based on semantic and trust properties. The basic idea is that each node uses only local observations to detect attacks, without having to collaborate with other nodes. The objective of this paper is to prove the effectiveness of such approaches by presenting simulation results. Title: A MULTI-DIMENSIONAL CLASSIFICATION FOR USERS OF SECURITY PATTERNS Author(s): Michael VanHilst, Eduardo B. Fernandez and Fabrício Braz Abstract: This paper presents a classification for security patterns that addresses the needs of users. The approach uses a matrix defined by dividing the problem space along multiple dimensions, and allows patterns to occupy regions defined by multiple cells in the matrix. It supports filtering for narrow or wide pattern selection, allows navigation along related axes of concern, and identifies gaps in the problem space that lack pattern coverage.
Results are preliminary but highlight differences with existing classifications. Title: IMPROVING LEAST PRIVILEGE IN SOFTWARE ARCHITECTURE BY GUIDED AUTOMATED COMPARTMENTALIZATION Author(s): Koen Buyens, Bart De Win and Wouter Joosen Abstract: Security principles, like least privilege, are among the resources in the body of knowledge for security that have survived the test of time. Support for these principles at the architectural level is limited, as there are no systematic rules on how to apply the principles in practice. As a result, these principles are often neglected in practice, since it requires a lot of effort to apply them consistently. This paper addresses this gap for the principle of least privilege in software architecture by eliciting architectural transformations that positively impact the least privilege properties of the architecture, while preserving its semantics. These transformations have been implemented in a tool and validated by means of a case study. Title: CONCEPTUAL DESIGN OF A METHOD TO SUPPORT IS SECURITY INVESTMENT DECISIONS WITHIN THE CONTEXT OF CRITICAL BUSINESS PROCESSES Author(s): Heinz Lothar Grob, Gereon Strauch and Jan Hermans Abstract: Information Systems are part and parcel of critical infrastructures. In order to safeguard the compliance of information systems, private enterprises and governmental organizations can implement a large variety of distinct measures, ranging from technical to organizational ones. Especially in the context of critical information system infrastructure, e.g. data centers, the decision for specific safeguards is complex. An appropriate method for the profitability assessment of alternative IS security measures in the context of critical business processes has not so far been developed. With this article we propose a conceptual design for a method which enables the determination of the success of alternative security investments from a process-oriented perspective.
Within a design science approach, we combine established artifacts from the field of IS security management with those from the field of process management and controlling. On that basis we develop a concept that allows decision-makers to prioritize the investments in dedicated IS safeguards in the context of critical business processes. Title: KNOWLEDGE EXTRACTION AND MANAGEMENT FOR INSIDER THREAT MITIGATION Author(s): Qutaibah Althebyan and Brajendra Panda Abstract: This paper presents a model for insider threat mitigation. While many of the existing insider threat models concentrate on watching insiders’ activities for any misbehavior, we believe that considering the insider himself/herself as a basic entity before looking into his/her activities will be more effective. In this paper, we present an approach that relies on an ontology to extract knowledge from an object. This represents the expected knowledge that an insider might gain by accessing that object. We then utilize this information to build a model for insider threat mitigation which ensures that only knowledge units related to the insider’s domain of access or his/her assigned tasks can be accessed by such insiders. Title: ADAPTIVE FILTERING OF COMMENT SPAM IN MULTI-USER FORUMS AND BLOGS Author(s): Marco Ramilli and Marco Prandini Abstract: The influence of web-based user-interaction platforms, like forums, wikis and blogs, has extended its reach into the business sphere, where comments about products and companies can affect corporate values. Thus, guaranteeing the authenticity of the published data has become very important. In fact, these platforms have quickly become the target of attacks aiming at injecting false comments. This phenomenon is worrisome only when implemented by automated tools, which are able to massively influence the average tenor of comments.
The research activity illustrated in this paper aims to devise a method to detect automatically generated comments and filter them out. The proposed solution is completely server-based, for enhanced compatibility and user-friendliness. The core component leverages the flexibility of logic programming for building the knowledge base in a way that allows continuous, mostly unsupervised, learning of the rules used to classify comments as acceptable or not. Title: NETWORK ACCESS CONTROL INTEROPERATION USING SEMANTIC WEB TECHNIQUES Author(s): William Fitzgerald, Simon Foley and Mícheál Ó. Foghlú Abstract: Network Access Control (NAC) requirements are typically implemented in practice as a series of heterogeneous security-mechanism-centric policies that span system services and application domains. For example, a NAC policy might be configured in terms of firewall, proxy, intrusion prevention and user-access policies. While defined separately, these policies may interoperate in the sense that the access requirements of one may conflict with and/or be redundant with respect to the access requirements of another policy. Thus, managing a large number of distinct policies becomes a major challenge in terms of deploying and maintaining a meaningful and consistent configuration. It is argued that the Semantic Web—an architecture that supports the formal representation, reasoning and sharing of heterogeneous domain knowledge—provides a natural solution to this challenge. A risk-based approach to configuring interoperating policies is described. Each NAC mechanism has an ontology that is used to represent its configuration. This heterogeneous and interoperating policy knowledge is unified with higher-level business (risk) rules, providing a single (extensible) ontology that supports reasoning across the different NAC policy configurations.
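The notions of conflict and redundancy between interoperating policies can be made concrete with a toy example (my own sketch under simplifying assumptions, not the paper's ontology-based method): each rule is reduced to a destination-port range plus an action, and two rules conflict when they match overlapping traffic but disagree on the action.

```python
# Simplified NAC rules: {"ports": (lo, hi), "action": "allow"|"deny"}.
# Real policies match on many more fields; this keeps only ports.

def overlaps(r1, r2):
    """Port ranges overlap if neither ends before the other starts."""
    return not (r1["ports"][1] < r2["ports"][0] or
                r2["ports"][1] < r1["ports"][0])

def subsumes(r1, r2):
    """r1 subsumes r2 if r1's port range fully covers r2's."""
    return (r1["ports"][0] <= r2["ports"][0] and
            r2["ports"][1] <= r1["ports"][1])

def classify(r1, r2):
    """Label the relation between two rules from different mechanisms."""
    if subsumes(r1, r2) and r1["action"] == r2["action"]:
        return "redundant"    # r2 adds nothing beyond r1
    if overlaps(r1, r2) and r1["action"] != r2["action"]:
        return "conflict"     # same traffic, opposite decisions
    return "independent"

firewall_rule = {"ports": (1, 1024), "action": "deny"}
proxy_rule    = {"ports": (80, 80),  "action": "allow"}
print(classify(firewall_rule, proxy_rule))  # conflict
```

Unifying rules from different mechanisms into one representation, as sketched by the shared dictionary shape here, is the role the paper assigns to the shared ontology.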
Title: AN ONTOLOGY-BASED FRAMEWORK FOR MODELLING SECURITY REQUIREMENTS Author(s): Joaquín Lasheras, Rafael Valencia-García, Jesualdo Tomás Fernández-Breis and Ambrosio Toval Abstract: In recent years, security in Information Systems (IS) has become an important issue that has to be taken into account in all stages of IS development, including the early phase of Requirements Engineering (RE). One of the most helpful RE strategies for improving the productivity and quality of software processes and products is the reuse of requirements, and this can be facilitated by Semantic Web technologies. In this work, we describe a novel ontology-based framework for representing and reusing security requirements based on risk analysis. A risk analysis ontology and a requirement ontology have been developed and combined to formally represent reusable security requirements and improve security in IS, detecting incompleteness and inconsistency in requirements and achieving semantic processing in requirements analysis. These ontologies have been developed according to a formal method for building and comparing ontologies, and with a standard language, OWL. This framework will be the basis for elaborating a “lightweight” method to elicit security requirements. Title: INTEGRATING PRIVACY POLICIES INTO BUSINESS PROCESSES Author(s): Michele Chinosi and Alberto Trombetta Abstract: The increased interest in business process management and modeling techniques has brought many organizations to make significant investments in business process modeling projects. One of the most recent proposals for a new business process modeling technique is the Business Process Modeling Notation (BPMN). Often, the modeled business processes involve sensitive information whose disclosure is usually regulated by privacy policies. As such, the interaction between business processes and privacy policies is a critical issue worth investigating.
Towards this end, we introduce a data model for BPMN and a corresponding XML-based representation (called BPeX), which we use to check whether a BPeX-represented business process is compliant with a P3P privacy policy. Our checking procedures are very efficient and require standard XML technology, such as XPath. Title: OBTAINING SECURE CODE IN SQL SERVER ANALYSIS SERVICES BY USING MDA AND QVT Author(s): Carlos Blanco, Ignacio García-Rodríguez de Guzmán, Eduardo Fernández-Medina, Juan Trujillo and Mario Piattini Abstract: Data warehouses manage historical information for the decision-making process, information that could be discovered by unauthorized users when no security constraints are established. Therefore, it is very important for OLAP tools to consider the security rules defined at early stages of the development lifecycle. Following the MDA approach, we have created an architecture for developing secure data warehouses, and in this paper we complete this architecture by obtaining secure multidimensional code in SQL Server Analysis Services from our secure multidimensional conceptual model (SECDW) by using QVT transformations. We focus on automatically obtaining code for the security constraints defined at upper abstraction levels. Workshop on Natural Language Processing and Cognitive Science Title: CONCEPTUAL METAPHOR AND SCRIPTS IN RECOGNIZING TEXTUAL ENTAILMENT Author(s): William R. Murray Abstract: The power and pervasiveness of conceptual metaphor can be harnessed to expand the class of textual entailments that can be performed in the Recognizing Textual Entailment (RTE) task and thus improve our ability to understand human language and make the kind of textual inferences that people do. RTE is a key component for question understanding and discourse understanding.
Although extensive lexicons, such as WordNet, can capture some word senses of conventionalized metaphors, a more general capability is needed to handle the considerable richness of lexical meaning based on metaphoric extensions that is found in common news articles, where writers routinely employ and extend conventional metaphors. We propose adding to RTE systems an ability to recognize a library of common conceptual metaphors, along with scripts. The role of the scripts is to allow entailments from the source to the target domain in the metaphor by describing activities in the source domain that map onto elements of the target domain. An example is the progress of an activity, such as a career or relationship, as measured by the successful or unsuccessful activities in a journey towards its destination. In particular we look at two conceptual metaphors: IDEAS AS PHYSICAL OBJECTS, which is part of the Conduit Metaphor of Communication, and ABSTRACT ACTIVITIES AS JOURNEYS. The first allows inferences that apply to physical objects to (partially) apply to ideas and communication acts (e.g., “he lobbed jibes to the comedian”). The second allows the progress of an abstract activity to be assessed by comparing it to a journey (e.g., “his career was derailed”). We provide a proof of concept where axioms for actions on physical objects, and axioms for how physical objects behave compared to communication objects, are combined to make correct RTE inferences in Prover9 for example text-hypothesis pairs. Similarly, axioms describing different states in a journey are used to infer the current progress of an activity, such as whether it is succeeding (e.g., “steaming ahead”), in trouble (e.g., “off course”), recovering (e.g., “back on track”), or irrevocably failed (e.g., “hijacked”). Title: SUMMARIZING REPORTS ON EVOLVING EVENTS - PART II: NON-LINEAR EVOLUTION Author(s): Stergos D.
Afantenos Abstract: A great portion of news articles report on events that evolve, with sources emitting various reports during the course of the event's evolution. This paper focuses on the task of summarizing such reports. After discussing the nature of evolving events (dividing them into linearly and non-linearly evolving events), we present a methodology for the creation of summaries from reports on such events by multiple sources. At the core of this methodology lies the notion of Synchronic and Diachronic Relations (SDRs), whose aim is the identification of similarities and differences across documents. SDRs do not connect textual elements inside the text but some structures that we call messages. We present the application of our methodology via a case study on the topic of terrorist incidents that involve hostages. Title: INTENDED BOUNDARIES DETECTION IN TOPIC CHANGE TRACKING FOR TEXT SEGMENTATION Author(s): Alexandre Labadié and Violaine Prince Abstract: This paper proposes a topical text segmentation method based on intended boundaries detection and compares it to a well-known default boundaries detection method, c99. We ran the two methods on a corpus of twenty-two French political discourses, and the results showed us that intended boundaries detection is better than default boundaries detection on well-structured text. Title: EMULATION OF HUMAN SENTENCE PROCESSING USING AN AUTOMATIC DEPENDENCY SHIFT-REDUCE PARSER Author(s): Atanas Chanev Abstract: The methods of NLP and Cognitive Science can complement each other for designing better models of the human sentence processing mechanism, on the one hand, and for developing better natural language parsers, on the other. In this paper, we show the performance of an automatic parser consistent with the architecture of the human parser of (Chanev, 2007) on various sentence processing experimental materials. Moreover, we use a linking hypothesis based on surprisal (Levy, 2008) to explain human reaction time patterns.
Although our results are generally not consistent with human performance, our emulations contribute to a better understanding of the architecture of the human parser and its disambiguation strategies. We also suggest that these strategies may possibly be used for improving the performance of automatic parsing. Title: A TEXT SUMMARIZATION APPROACH UNDER THE INFLUENCE OF TEXTUAL ENTAILMENT Author(s): Elena Lloret, Óscar Ferrández, Rafael Muñoz and Manuel Palomar Abstract: This paper presents how text summarization can be influenced by textual entailment. We show that if we use textual entailment recognition together with a text summarization approach, we achieve good results for the final summaries, obtaining an improvement of 6.78% with respect to the summarization approach alone. We also compare the performance of this combined approach to two baselines (the one provided in DUC 2002 and ours based on a word-frequency technique), and we discuss the preliminary results obtained in order to infer conclusions that can be useful for future research. Title: BUILDING A RECOMMENDER SYSTEM USING COMMUNITY LEVEL SOCIAL FILTERING Author(s): Alexandra Balahur and Andrés Montoyo Abstract: Finding the "perfect" product among the dozens of products available on the market is a difficult task for any person. One has to balance personal needs and tastes, financial limitations, latest trends and the social assessment of products. On the other hand, designing the "perfect" product for a given category of users is a difficult task for any company, involving extensive market studies and complex analysis. This paper presents a method to gather the attributes that make up the "perfect" product within a given category and for a specified community.
The system built employing this method can be useful for two purposes. Firstly, it can recommend products to a user based on the similarity between the feature attributes that most users in his/her community consider important and positive for the product type and the products the user can choose from. Secondly, it can provide practical feedback for companies on what is valued, and how, for a product within a certain community. For the moment, we consider the community level to be the country, and thus we apply and compare the proposed method for English and Spanish. For each product class, we first automatically extract general features (characteristics describing any product, such as price, size, and design); for each product we then extract specific features (such as picture resolution in the case of a digital camera) and feature attributes (adjectives grading the characteristics, such as "modern" or "faddy" for design). Further on, we use "social filtering" to automatically assign a polarity (positive or negative) to each of the feature attributes, using a corpus of "pros and cons"-style customer reviews. Additional feature attributes are classified depending on the previously assigned polarities using Support Vector Machines with Sequential Minimal Optimization and the Normalized Google Distance. Finally, recommendations are made by computing the cosine similarity between the vector representing the "perfect" product and the vectors corresponding to the products a user could choose from. Title: NATURAL LANGUAGE INTERFACES TO DATABASES: SIMPLE TIPS TOWARDS USABILITY Author(s): Luísa Coheur, Ana Guimarães and Nuno Mamede Abstract: Natural Language Interfaces to Databases can be an easy way to obtain information: the user simply has to write a question in his/her own language to get the desired answer. Nevertheless, this kind of application also presents some problems.
Many of those arise from the fact that those who develop the interface do so according to their own idea of usability, which is sometimes far from the real interaction the interface will have to support; but even when a question is syntactically supported, it can be misunderstood and a wrong answer can be provided to the user. In this paper we present some simple tips intended to minimize these situations. Moreover, we evaluate the importance of presenting examples of successful and unsuccessful questions to the user. As a proof of concept we implemented JaTeDigo, a natural language interface to a cinema database that follows these ideas. Title: SEMANTIC NAVIGATION OF NEWS Author(s): Walter Kasper, Jörg Steffen and Yajing Zhang Abstract: We present a system for navigating news that is based on the semantic similarity of documents. News documents are automatically annotated semantically using information extraction. The annotations are displayed to the user, who can easily retrieve cross-lingual, semantically related documents by selecting interesting items. Title: THE ROLE OF AN ABSTRACT ONTOLOGY IN THE COMPUTATIONAL INTERPRETATION OF CREATIVE CROSS-MODAL METAPHOR Author(s): Sylvia Weber Russell Abstract: Various approaches to computational metaphor interpretation are based on pre-existing similarities between source and target domains and/or on metaphors already observed to be prevalent in the language. This paper addresses similarity-creating cross-modal metaphoric expressions. The described approach depends on the imposition of abstract ontological components, which represent source concepts, onto target concepts. The challenge of such a system is to represent both denotative and connotative components which are extensible, together with a framework of all general domains between which such extensions can conceivably occur. An existing ontology of this kind is outlined.
It is suggested that the use of such an ontology is well adapted to the interpretation of both conventional and unconventional metaphor that is similarity-creating. Title: LIMITS OF LEXICAL SEMANTIC RELATEDNESS WITH ONTOLOGY-BASED CONCEPTUAL VECTORS Author(s): Lim Lian Tze and Didier Schwab Abstract: Human-computer interactions are at the heart of many Natural Language Processing applications, including message planning, information retrieval and interactive machine translation. In such systems, it is crucial to ensure a satisfactory user experience by providing results that seem adequately “human”. The Miller-Charles benchmark dataset was compiled so that machine-computed semantic similarity measures of word pairs could be compared to human judgement. We use conceptual vectors as a formalism for representing thematic aspects of text segments, together with appropriate definitions of distances and measures that allow the computation of semantic relatedness. In this paper, we study the behavior of ontology-based conceptual vectors by comparing the results to the Miller-Charles benchmark, and examine the limits of such an approach due to mapping. We also discuss the viability of the Miller-Charles dataset as a benchmark for assessing lexical semantic relatedness. Title: LMF STANDARDIZED MODEL FOR THE EDITORIAL ELECTRONIC DICTIONARIES OF ARABIC Author(s): Feten Baccar Ben Amar, Aïda Khemakhem, Bilel Gargouri, Kais Haddar and Abdelmajid Ben Hamadou Abstract: This paper is concerned with the development of Arabic electronic dictionaries for human (editorial) use. It proposes a unified and standardized model for these dictionaries according to the future standard LMF (Lexical Markup Framework) ISO 24613. Thanks to its subtle and standardized structure, this model allows the development of extensible dictionaries on which generic interrogation functions adapted to the user’s needs can be implemented.
This model has already been applied to some existing Arabic dictionaries using the ADIQTQ (Arabic DIctionary Query Tool) system, which we developed for the generic interrogation of standardized dictionaries of Arabic. Title: THE ROLE OF ATTENTION IN UNDERSTANDING SPATIAL EXPRESSIONS UNDER THE DISTRACTOR CONDITION Author(s): Tatsumi Kobayashi, Asuka Terai and Takenobu Tokunaga Abstract: To develop a computational model of the understanding of spatial expressions, various factors should be taken into account. We have been exploring the relations between the goodness-of-fit of spatial terms and various geometric factors such as object size, the distance between objects and the observer's viewpoint. Although the dual-object relation between the located and reference objects can be handled with relatively simple models, introducing a distractor object requires further factors such as attention to the objects. Based on our experiment using Japanese topological and projective terms, this paper proposes a computational model that estimates the goodness-of-fit of spatial terms and incorporates an attention model of the distractor. The proposed model was evaluated using our experimental data. Title: A PRELIMINARY STUDY ON INDUCING LEXICAL CONCRETENESS FROM DICTIONARY DEFINITIONS Author(s): Oi Yee Kwong Abstract: While the distinction between concrete words and abstract words appears to be inherent, measures of lexical concreteness relying on human ratings are more intuitive than objective. In this study, we aim at extending the concreteness distinction from the lexical level to the sense level, and at inducing a numerical index of concreteness for individual senses and words from dictionary definitions. The high overall agreement between human ratings and definition-induced ratings encourages us to further simulate the distinction from more language resources.
Such a simulated index of concreteness is believed to inform not only lexicography but also natural language processing tasks such as automatic word sense disambiguation. Title: ONTOLOGY-DRIVEN VACCINATION INFORMATION EXTRACTION Author(s): Liliana Ferreira, António Teixeira and João Paulo Silva Cunha Abstract: Increasingly, medical institutions have access to clinical information through computers. The need to process and manage the large amount of data is motivating the recent interest in semantic approaches. Data regarding vaccination records is common in such systems. Also, since vaccination is a major area of concern in health policies, a lot of information is available in the form of clinical guidelines. However, the information in these guidelines may be difficult to access and apply to a specific patient during a consultation. The creation of computer-interpretable representations allows the development of clinical decision support systems, improving patient care through the reduction of medical errors, increased safety and satisfaction. This paper describes the method used to model and populate a vaccination ontology and the system which recognizes vaccination information in medical texts. The system identifies relevant entities in medical texts and populates an ontology with new instances of classes. An approach to automatically extracting information about inter-class relationships using association rule mining is suggested. Title: TRACKER TEXT SEGMENTATION APPROACH: INTEGRATING COMPLEX LEXICAL AND CONVERSATION CUE FEATURES Author(s): C. Chibelushi and B. Sharp Abstract: While text segmentation is a topic which has received great attention since 9/11, most current research projects remain focused on expository texts, stories and broadcast news. Current segmentation methods are well suited to written and structured texts, making use of their distinctive macro-level structures.
Text segmentation of transcribed multi-party conversation presents a different challenge, given the lack of linguistic features such as headings, paragraphs, and well-formed sentences. This paper describes an algorithm suited to transcribed meeting conversations which combines semantically complex lexical relations with conversational cue phrases to build lexical chains for determining topic boundaries. Title: VERBAL FLUENCY, OR HOW TO STAY ON TOP OF THE WAVE? Author(s): Michael Zock and Stergos D. Afantenos Abstract: Speaking a language and achieving fluency in another one is a highly complex process which requires the acquisition of various kinds of knowledge and skills, such as the learning of words, rules and patterns and their connection to communicative goals (intentions), the usual starting point. To help the learner acquire these skills we propose an enhanced, electronic version of an age-old method: pattern drills (PDs). While highly regarded in the fifties, PDs have since become unpopular, partially because of their lack of grounding (natural context) and their rigidity. Despite these shortcomings, we do believe in the virtues of this approach, at least with regard to the acquisition of basic linguistic reflexes or skills (automatisms) necessary to survive in the new language. Of course, the method needs improvement, and we show here how this can be achieved. Unlike tapes or books, computers are open media, allowing for dynamic changes that take users' performances and preferences into account. Building an electronic version of PDs amounts to building an open resource, accommodating the users' ever-changing needs. Title: TOWARDS A FRAMEWORK FOR INTEGRATED NATURAL LANGUAGE PROCESSING ARCHITECTURES FOR SOCIAL ROBOTS Author(s): Matthias Scheutz and Kathleen Eberhard Abstract: Current social robots lack the natural language capacities needed to interact with humans in natural ways.
In this paper, we review some of the psycholinguistic evidence for particular styles of spoken language interactions and the implications for language processing on robots. We then present results from human experiments intended to isolate spoken interaction types in a search-and-rescue task. We discuss implications for NLP architectures for embodied situated agents (such as robots) and briefly sketch our ongoing efforts to develop an architecture that exhibits some of the desired properties. Title: STUDYING HUMAN TRANSLATION BEHAVIOR WITH USER-ACTIVITY DATA Author(s): Michael Carl, Arnt Lykke Jakobsen and Kristian T.H. Jensen Abstract: The paper introduces a new research strategy for the investigation of human translation behavior. While conventional cognitive research methods make use of think-aloud protocols (TAP), we introduce and investigate User-Activity Data (UAD). UAD consists of the translator's recorded keystroke and eye-movement behavior, which makes it possible to replay a translation session and to register the subjects' comments on their own behavior during a retrospective interview. UAD has the advantage of being objective and reproducible and, in contrast to TAP, does not interfere with the translation process. The paper gives the background of this technique and an example for a particular English-to-Danish translation problem. Our goal is to elaborate and investigate cognitively grounded basic translation concepts which are materialised and traceable in the UAD and which, at a later stage, will provide the basis for appropriate and targeted help for the translator at a given moment. Workshop on Ubiquitous Computing Title: USING GEOFENCING AS A MEANS TO SUPPORT FLEXIBLE REAL TIME APPLICATIONS FOR DELIVERY SERVICES Author(s): Georg Schneider, Björn Dreher and Ole Seidel Abstract: Delivery services are an industry sector that may benefit greatly from the availability of mobile devices and connectivity.
Today the support and tight integration of mobile users into company processes is finally possible using off-the-shelf components. This paper presents the concept and implementation of an application targeted at the domain of delivery services. The complete process, from route planning through navigation to delivery confirmation, can be supported. The concept of geofencing is used to automatically detect different states in the execution of the delivery process in order to trigger context-adapted actions, such as navigation close to target points and delivery confirmation. The system is realized as a GPS-based Windows Mobile application using a conventional consumer PDA. Title: ADAPTIVE SENSING BASED ON FUZZY SYSTEM FOR WIRELESS SENSOR NETWORKS Author(s): Romeo Mark A. Mateo, Young-Seok Lee, Hyunho Yang, Sung-Hyun Ko and Jaewan Lee Abstract: Wireless sensor networks (WSN) are used in various application areas to implement smart data processing and ubiquitous systems. Recent research on parking management systems based on wireless sensor networks has not considered adaptive sensing, although the effective implementation of these distributed computing devices affects the overall reliability of the parking management system. This paper proposes adaptive sensing using the proposed fuzzy wireless sensors, implemented in a ubiquitous parking management system. The fuzzy inference system is encoded in the sensor devices for efficient car-presence detection. A rule-based adaptive module is used to change the fuzzy set values of the wireless sensors based on rule patterns specified by an expert. A prototype implementation of the proposed fuzzy wireless sensors is done in a ubiquitous parking management system simulator.
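A fuzzy car-presence inference of the kind described in the abstract above can be sketched as a single Sugeno-style rule evaluation. Everything concrete below (the triangular membership shapes, the three fuzzy sets and the rule outputs) is our own illustrative assumption, not the authors' actual rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def car_presence(signal):
    """Infer a car-presence score in [0, 1] from a normalized sensor reading.
    Illustrative rule base: LOW reading -> absent, MEDIUM -> uncertain, HIGH -> present."""
    low = tri(signal, -0.5, 0.0, 0.5)
    medium = tri(signal, 0.2, 0.5, 0.8)
    high = tri(signal, 0.5, 1.0, 1.5)
    weights = [(low, 0.0), (medium, 0.5), (high, 1.0)]
    total = sum(w for w, _ in weights)
    if total == 0:
        return 0.0
    # Weighted-average (Sugeno-style) defuzzification of the rule outputs
    return sum(w * v for w, v in weights) / total
```

A rule-based adaptive module, as in the paper, would then amount to rewriting the breakpoints of the membership functions at runtime.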
Title: AN INTELLIGENT SYSTEM FOR DISTRIBUTED PATIENT MONITORING AND CARE GIVING Author(s): Alessandra Esposito, Luciano Tarricone, Marco Zappatore and Luca Catarinucci Abstract: This paper presents a proposal for a context-aware pervasive framework. The framework follows a general-purpose architecture, centred around an ontological context representation. The ontology provides the vocabulary with which software agents communicate and perform rule-based reasoning, in order to assess the system's reaction to context changes. The system components and their coordinated operations are described through a simple example of a concrete application in a home-care scenario. Workshop on Model-Driven Enterprise Information Systems Title: A FRAMEWORK TO COMBINE MDA AND ONTOLOGY LEARNING Author(s): Regina C. Cantele, Maria Alice G. V. Ferreira, Diana F. Adamatti and Moyses Araujo Abstract: This paper proposes to join two different approaches: Software Engineering and the Semantic Web. The first one comes from Model Driven Architecture (MDA) and the second from the Ontology Engineering area, more specifically Ontology Learning. The idea is to accelerate the construction of ontologies from knowledge already represented, based on MDA standards for interoperability. A framework is proposed to join these concepts, in which we define the requirements and the sequence of steps to apply it. Title: TRANSFORMING INTERNAL ACTIVITIES OF BUSINESS PROCESS MODELS TO SERVICE COMPOSITIONS Author(s): Teduh Dirgahayu, Dick Quartel and Marten van Sinderen Abstract: As a service composition language, BPEL imposes the constraint that a business process model should consist only of activities for interacting with other business processes. BPEL provides limited support for implementing internal activities, i.e. activities that are performed by a single business process without the involvement of other business processes.
BPEL is hence not suitable for implementing internal activities for complex data manipulation. There are a number of options for making BPEL able to implement any kind of internal activity. In this paper we analyse those options based on their feasibility, efficiency, reusability, portability and merging. The analysis indicates that delegating internal activities' functionality to other services is the best option. We therefore present an approach for transforming internal activities into service invocations. The application of this approach to a business process model results in a service composition model that consists of interaction activities only. Title: INTERACTION PATTERNS FOR REFINING BEHAVIOUR SPECIFICATIONS OF CONTEXT-AWARE MOBILE SERVICES Author(s): Laura Daniele, Luís Ferreira Pires and Marten van Sinderen Abstract: In the context of Model-Driven Architecture (MDA), little attention has been given to the behavioural aspects of service design. This paper proposes an MDA-based approach that considers these aspects in the development of context-aware mobile services. Starting from the specification of the externally observable behaviour of a service, we gradually refine this behaviour considering the internal structure of the service. In particular, we present a structure that is general enough to be used for several context-aware mobile services, yet can be configured for the specific service to be developed. The last step of our approach consists of identifying sequences of interactions, which we call interaction patterns, that can be mapped onto a behaviour model of the components that execute the service. This model is platform-independent and may be realized in terms of several specific target technologies.
Workshop on Technologies for Context-Aware Business Process Management Title: CONTEXT-BASED CONFIGURATION OF PROCESS VARIANTS Author(s): Alena Hallerbach, Thomas Bauer and Manfred Reichert Abstract: When designing process-aware information systems, variants of the same process type usually have to be defined and maintained. Each of these process variants constitutes an adjustment of the same process to specific requirements forming the variant context. Current business process management tools do not support the context-based definition and configuration of such variants in an adequate manner. Instead, each process variant has to be defined from scratch and kept in a separate model. This results in considerable redundancies when modeling and adapting process variants, and is also a time-consuming and error-prone procedure. This paper presents a more flexible, context-based approach for configuring and managing process variants. In particular, we allow for the configuration of process variants by applying a context-dependent set of well-defined change operations to a base process. Workshop on RFID Technology - Concepts, Applications, Challenges Title: LOCATION TECHNIQUE BASED ON PATTERN RECOGNITION OF RADIO SIGNAL STRENGTH USING ACTIVE RFID Author(s): Romeo Mark A. Mateo, Insook Yoon and Jaewan Lee Abstract: RFID technology is used in various application areas to implement smart data processing and ubiquitous systems. Recent research on car parking systems has not considered implementing an efficient and accurate location technique using active RFID. This paper presents a framework for a ubiquitous car parking system and proposes a location scheme using a pattern recognizer agent in its ubiquitous networks. The proposed pattern recognizer agent (PRA), based on a multilayer perceptron (MLP), determines the pattern of the radio signal for an accurate location technique.
The procedure provides a training model for received signal strength (RSS) patterns, so as to classify the signals and locate the specific parking slot of a car. The experiment compared the proposed method with other accurate algorithms and found that the MLP is a more accurate classifier and more time-efficient in building the classification model. Title: ID-SERVICES: AN RFID MIDDLEWARE ARCHITECTURE FOR MOBILE APPLICATIONS Author(s): Joachim Schwieren and Gottfried Vossen Abstract: The use of RFID middleware to support application development for, and integration of, RFID hardware in information systems has become quite common in RFID applications where reader devices remain stationary, which currently represents the largest part of all RFID applications in use. Another field for applying RFID technology, offering a huge set of novel possibilities and applications, is mobile applications, where readers are no longer fixed. In order to address the specific issues of mobile RFID-enabled applications and to support developers in rapid application development, we present the architecture of an RFID middleware for mobile applications. The approach has been used to implement MoVIS (Mobile Visitor Information System), an RFID-enabled mobile application which allows museum visitors to request individually adapted multimedia information about exhibits in an intuitive way. Title: HF RFID READER FOR MOUSE IDENTIFICATION - STUDY OF MAGNETIC COUPLING BETWEEN MULTI-ANTENNAS AND A FERRITE TRANSPONDER Author(s): C. Ripoll, P. Poulichet, E. Colin, C. Maréchal and A. Moretto Abstract: This paper describes the optimization of an RFID system operating at 13.56 MHz that is used to identify a mouse. When the mouse passes near an antenna, the small transponder (1x6 mm2) placed in its body under the skin communicates with the antenna placed in the vicinity. The communication distance is small (around 3 centimeters), and our objective was to increase this distance.
The transponder receives its electrical supply from the emitter antenna and responds with a coded signal sent by an integrated circuit placed in the transponder. All the elements of the chain are taken into account in an ADS simulation, and we determine the value of the minimum voltage necessary for remote biasing. Finite-element analysis is employed to determine the values of the magnetic field in the vicinity of the transmitting antenna. The paper shows how to correlate the influential parameters to increase the communication distance. To improve the reading rate, a novel differential receiving antenna has been designed that allows improved decoupling from the very close transmitting antenna. Title: REFERENCE ARCHITECTURE FOR EVENT-DRIVEN RFID APPLICATIONS Author(s): Jürgen Dunkel and Ralf Bruns Abstract: RFID technology has been applied in a wide range of application areas, but usually it is not well integrated with business processes and backend enterprise information systems. Current enterprise information systems cannot deal with events in order to drive business processes. Due to the high volume of events and their complex dependencies, it is not possible to have a fixed, predefined process flow on the business level. In this paper we propose a reference architecture for event-driven RFID applications. The key concept of the approach is to use complex event processing (CEP) as the model for processing and managing event-driven information systems. The power of event-driven architecture originates in its ability both to separate knowledge from its implementation logic and to adapt without source code modification. In order to prove the practical usefulness of the reference architecture, we present a case study on tracking samples in large research laboratories using RFID.
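As a rough illustration of what complex event processing over raw RFID reads involves, the sketch below collapses duplicate reads at the same antenna and derives a higher-level "moved" event when a tag reappears at a different reader. The event names, time window and data model are our own assumptions, not part of the paper's reference architecture:

```python
from collections import namedtuple

# A raw read event: which tag was seen, by which reader, at what time
Read = namedtuple("Read", "tag reader ts")

def detect_moves(reads, window=5.0):
    """Minimal CEP pass: suppress duplicate reads and emit a MOVED
    event whenever a tag is next seen by a different reader within
    the given time window (seconds)."""
    last = {}       # tag -> most recent distinct read
    events = []
    for r in sorted(reads, key=lambda r: r.ts):
        prev = last.get(r.tag)
        if prev and prev.reader == r.reader:
            continue        # duplicate read at the same antenna: drop it
        if prev and r.ts - prev.ts <= window:
            events.append(("MOVED", r.tag, prev.reader, r.reader))
        last[r.tag] = r
    return events
```

A full CEP engine adds rule languages, sliding windows and event hierarchies on top of exactly this kind of stream pass.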
Title: LOCATION THROUGH PROXIMITY IN RFID SYSTEMS Author(s): Daniele Biondo, Antonio Iera, Antonella Molinaro, Massimiliano Mattei and Sergio Polito Abstract: In this paper we introduce Location Estimation through Proximity Information (LEPI), an algorithm that aims at locating portable RFID readers in areas where active-RFID grids are deployed. Location estimation is accomplished through proximity information, which the reader derives by performing tag interrogation at increasing RF power levels. RFID tags surrounding the reader are incrementally detected and their known positions are then averaged, thus providing an accurate estimation of the reader's location. The performance of the proposed approach is assessed by experimental trials conducted in indoor environments. They testify both to the actual feasibility of such a solution and to its better accuracy when compared to other reference RFID-based location techniques. Title: TOUCH & CONTROL: INTERACTING WITH SERVICES BY TOUCHING RFID TAGS Author(s): Iván Sánchez, Jukka Riekki and Mikko Pyykkönen Abstract: We suggest controlling local services using an NFC-capable mobile phone and RFID tags placed in the local environment behind control icons. When a user touches a control icon, a control event is sent to the corresponding service. The service processes the event and performs the requested action. We present a platform realizing this control approach and a prototype service playing videos on a wall display. We compare this touch-based control with controlling the same service using the mobile phone's keypad. Title: ROUTING MECHANISM FOR SECURE, DISTRIBUTED DISCOVERY SERVICES FOR GLOBAL AUTO-ID NETWORKS Author(s): Jons-Tobias Wamhoff, Eberhard Grummt and Ralf Ackermann Abstract: Enterprises capture immense amounts of data in Auto-ID repositories as items travel along the supply chain.
To enable applications such as track and trace, repositories containing relevant information need to be discovered, because they are distributed among individual partners. At the same time, confidential information, including supply relationships, needs to be protected. The task of identifying repositories while keeping the secrets of a supply chain is performed by a Discovery Service. In this paper we present and compare hierarchical routing mechanisms for secure, distributed Discovery Services that mediate the interaction between the requesting client and the Discovery Service. The discovery information can be spread over multiple nodes. The routing mechanism ensures that queries sent to any node within the Discovery Service hierarchy will be forwarded to all responsible nodes. By anonymizing the request and response propagation, the repositories and Discovery Service data nodes remain hidden as long as the request has not produced an authorized result set. Title: HOW RFID TECHNOLOGY CAN ASSIST THE VISUALLY IMPAIRED: THE SESAMONET SYSTEM Author(s): Marcello Barboni, Francesco Rizzo, Graziano Azzalin and Marco Sironi Abstract: The project's objective is the development of an integrated system to increase the mobility of people with disabilities and their personal safety and security, by identifying a secure path to walk through selected areas, particularly for people with visual disabilities. This is done through the use of mature and proven technologies (RFID, antennas, Bluetooth, etc.) which only have to be integrated for this specific application. The system is based on three main components: a path made of transponders, a custom-designed walking cane and a smart phone. Each RFID tag is associated with a message or a small "beep". The system describes the environment and warns the user if there is a potential danger such as a road crossing or a step.
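The core routing idea in the Discovery Service abstract, forwarding a query to every node responsible for part of the identifier space, can be sketched as follows. The prefix-registration scheme, class name and EPC strings are purely illustrative assumptions and say nothing about the paper's actual hierarchy or anonymization protocol:

```python
class DiscoveryRouter:
    """Toy sketch of a Discovery Service routing table: nodes register
    the EPC prefixes they hold data for, and a query is forwarded to
    every node whose registered prefix matches the queried EPC."""

    def __init__(self):
        self.prefix_to_nodes = {}   # prefix -> set of node ids

    def register(self, node, prefix):
        """A data node announces responsibility for an EPC prefix."""
        self.prefix_to_nodes.setdefault(prefix, set()).add(node)

    def route(self, epc):
        """Return all responsible nodes for the queried EPC.  In the
        secure setting, the client would only see the merged result,
        never the node identities."""
        nodes = set()
        for prefix, holders in self.prefix_to_nodes.items():
            if epc.startswith(prefix):
                nodes |= holders
        return sorted(nodes)
```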
Title: TOPOGRAPHIC CONNECTIONIST UNSUPERVISED LEARNING FOR RFID BEHAVIOR DATA MINING Author(s): Guénäel Cabanes, Younès Bennani, Claire Chartagnat and Dominique Fresneau Abstract: Radio Frequency Identification (RFID) is an advanced tracking technology that can be used to study the behavior of animal societies. The aim of this work is to build a new RFID-based autonomous system for following individual spatio-temporal activity, for which knowledge is not available today, and to develop new tools for automatic data mining. We study here how to transform these data into knowledge about the division of labor and intra-colonial cooperation and conflict in an ant colony, by developing a new unsupervised learning data mining method (DS2L-SOM: Density-based Simultaneous Two-Level Self-Organizing Map) to find homogeneous clusters (i.e. sets of individuals which share a distinctive behavior). This method is very fast and efficient, and allows very useful visualization tools. Title: HOW NEW TECHNOLOGIES CAN IMPROVE COLD CHAIN MANAGEMENT? Author(s): Luigi Battezzati, Giovanni Miragliotta, Alessandro Perego and Angela Tumino Abstract: There is growing attention towards technological solutions that can improve cold chain management. This paper analyzes the pros and cons of four of the most interesting solutions (Data Loggers, Time Temperature Indicators, semi-passive RFId and Wireless Sensor Networks), paying particular attention to the overall balance of costs and benefits. Thanks to this preliminary study, a solution based on Wireless Sensor Networks to better monitor the ice-cream supply chain of a prominent company (i.e. Nestlé Italy) has been designed, and its impact on overall cold chain performance has been evaluated.
Title: EVALUATION OF AUTOMATIC IDENTIFICATION TECHNOLOGIES (AUTO-ID) TO BE IMPLEMENTED IN THE HEALTH CARE SECTOR Author(s): Mike Krey Abstract: The tasks and objectives of automatic identification (Auto-ID) are to provide information on goods and products. It has been established for years in the areas of logistics and trading and can no longer be ignored by the German health care sector (Kern, 2006: 2-11). Some German hospitals have already discovered the capabilities of Auto-ID: improvements in quality and safety, and reductions in risk, cost and time, are areas where gains are achievable. Privacy protection, legal restraints, and the personal rights of patients and staff members are just a few of the aspects which make the health care sector a sensitive field for the implementation of Auto-ID. With the help of a quantifiable, science-based evaluation, an answer is sought as to which Auto-ID technology has the highest capability to be implemented in the health care business. Based on the conclusions of the evaluation, a practical, usable process model is to be devised. Practical experience in the implementation and configuration of Auto-ID and interviews with Chief Technology Officers (CTOs) of German hospitals have been the basis for this model. Title: CLASSIFICATION OF RFID ATTACKS Author(s): Aikaterini Mitrokotsa, Melanie R. Rieback and Andrew S. Tanenbaum Abstract: RFID (Radio Frequency Identification) systems are emerging as one of the most pervasive computing technologies in history due to their low cost and broad applicability. Although RFID networks have many advantages, they also present a number of inherent vulnerabilities with serious potential security implications. This paper develops a structured methodology for analyzing the risks that RFID networks face by developing a classification of RFID attacks, presenting their important features, and discussing possible countermeasures.
The goal of the paper is to categorize the existing weaknesses of RFID communication so that a better understanding of RFID attacks can be achieved, and subsequently more efficient and effective algorithms, techniques and procedures to combat these attacks may be developed.

Title: IMPLEMENTING EPCIS WITH DEPCAS RFID MIDDLEWARE
Author(s): Carlos Cerrada, Ismael Abad, José Antonio Cerrada and Ruben Heradio
Abstract: RFID middleware is a new breed of data acquisition software that allows data transfer between device readers and business applications. RFID middleware has de facto been structured in several layers: infrastructure layer, event processor, tag data translator, rules and composite process, and application integration or EPCIS. The term EPCIS is a generalization referring to the upper layer that links RFID middleware with external systems such as SCMs, ERPs, WMSs, or any user application that uses auto-identification data. In this paper we present the EPCIS layer of DEPCAS. DEPCAS (Data EPC Acquisition System) is an RFID middleware proposal based on an extension of supervisory control and data acquisition (SCADA) systems. We examine the elements that compose EPCIS in DEPCAS, based on SOA (Service-Oriented Architecture) and publish/subscribe messaging technologies implemented with JMS (Java Message Service). EPCIS in DEPCAS handles two-way communication: it receives back-end configuration information to enrich the RFID semantic process defined with scenarios, and offers services to exploit the RFID acquisition results.

Title: A UBIQUITOUS KNOWLEDGE-BASED SYSTEM TO ENABLE RFID OBJECT DISCOVERY IN SMART ENVIRONMENTS
Author(s): M. Ruta, T. Di Noia, E. Di Sciascio, F. Scioscia and E. Tinelli
Abstract: This paper presents an extended framework supported by a suitable dissemination protocol to enable ubiquitous Knowledge Bases (u-KBs) in pervasive RFID environments.
A u-KB is a distributed and decentralized knowledge base in which the factual knowledge (i.e., individuals) is scattered among objects disseminated within the environment, with no centralized repository or coordination.

Title: ENHANCING SENSOR NETWORK CAPABILITIES THROUGH A COST-EFFECTIVE RFID TAG FOR SENSOR DATA TRANSMISSION
Author(s): Luca Catarinucci, Luciano Tarricone, Riccardo Colella and Alessandra Esposito
Abstract: The use of Radio Frequency Identification (RFID) technology for the automatic transmission of physical parameters in wireless sensor networks could undoubtedly pave the way to a large class of attractive applications in healthcare, automotive, diagnostic systems, robotics and many other fields. Nevertheless, although some RFID tags capable of transmitting sensor-like information are already on the market, only a restricted number of sensors, such as those for temperature or pressure measurement, can be easily miniaturized and embedded in the RFID chip. The integration of more complex sensors, in fact, appears to be complicated and extremely expensive. In this paper, a cost-effective general-purpose multi-ID tag is proposed. It can be connected to generic sensors and is capable of transmitting a proper combination of ID codes depending on the actual value at its input. Such a tag represents a natural evolution of standard RFID technology: neither the digital design nor the cost of the tag is substantially modified.

Title: RFID BASED ANTI-COUNTERFEITING UTILIZING SUPPLY CHAIN PROXIMITY
Author(s): Ali Dada and Carsten Magerkurth
Abstract: This paper discusses a novel RFID-based approach to determine the probability that items in a supply chain are counterfeits, based on their proximity to already identified counterfeits. The central idea is that items moving close to fakes are more likely to be fakes than items traveling with genuine items. The required proximity information can be deduced from events in EPCIS repositories for RFID-tagged items.
The paper discusses two mathematical algorithms for calculating the probabilities and presents the results of a comparative simulation study. The results are discussed in terms of conclusions for a future implementation with RFID-tracked supply chains.

Title: TAG LOSS PROBABILITY EVALUATION FOR A CONTINUOUS FLOW OF TAGS IN THE EPC-GLOBAL STANDARD
Author(s): Javier Vales-Alonso, M. Victoria Bueno-Delgado, Esteban Egea-López and Joan García-Haro
Abstract: This paper addresses the evaluation of a passive RFID system under a continuous flow of tag arrivals and departures, for instance on a conveyor belt installation. In such a configuration, the main operational variable is the Tag Loss Probability (TLP). Since tags stay in the coverage area of the reader for a finite amount of time, some tags may leave the area unidentified if many tags compete simultaneously for identification. A suitable configuration of the system (flow speed, tags per block, time between blocks, etc.) must be selected to ensure that the TLP remains under a given operative threshold. In this paper we focus on the EPCglobal Class-1 Gen-2 standard, which specifies an anti-collision protocol based on Framed Slotted Aloha. Our work aims at evaluating the TLP for the different configurations of this protocol, and at selecting the right scenario configuration to guarantee a TLP below a given limit. This issue has not been studied yet, despite its relevance in real-world scenarios based on assembly lines or other dynamic environments. Simulation results show that both the anti-collision protocol operation mode and the flow configuration heavily impact performance. Additionally, real tests have been conducted which confirm the simulation results.

Workshop on Human Resource Information Systems

Title: WHICH COMES FIRST E-HRM OR SHRM?
Author(s): Janet H. Marler and Emma Parry
Abstract: There has been some discussion in the literature of the relationship between e-HRM and strategic HRM.
One body of literature argues that the use of e-HRM leads to a more strategic role for the HR function by freeing time and providing accurate information for HR practitioners (Parry and Tyson, 2006; Lawler and Mohrman, 2003; Lepak and Snell, 1998). An alternative argument suggests that e-HRM is the result of a strategic HR orientation, in that it is one means by which strategic HR can be practiced (Reddington and Martin, 2006; Broderick and Boudreau, 1992). This study aimed to disentangle these two arguments using data from a large-scale international HR survey. The results showed that e-HRM does not appear to be the linking mechanism through which companies with HR strategies become more involved in setting business strategy. Instead, the study finds that e-HRM and strategic involvement are related indirectly, through their relationship to a company's HR strategy. Further research is needed, however, to establish whether e-HRM predicts or is an outcome of HR strategy, and also whether involvement in setting business strategy predicts or is an outcome of having an HR strategy. This study provides a useful developmental contribution to our understanding of the role of e-HRM in strategic HR management.

Title: INFORMATING HRM THROUGH »DATA MINING«?
Author(s): Stefan Strohmeier and Franca Piazza
Abstract: Beyond the mere automation of tasks, a major potential of HRIS is to informate HRM. Within current HRIS the informate function is realized through a data querying approach. Given a major innovation in data analysis subsumed under the concept of »data mining«, potentially valuable opportunities to informate HRM are lost when the data mining approach is overlooked. Our paper therefore aims at a conceptual evaluation of the potential of data mining to informate HRM. We hence discuss and evaluate data mining as a novel approach compared to data querying, the conventional approach to informating HRM.
Based on a robust framework of informational contributions, our analysis reveals interesting potentials of data mining to generate explicative and prognostic information, and hence to enrich and complement the querying approach. To deepen the knowledge of the contributions of data mining, we finally derive recommendations for future research.

Title: E-HRM AND IT GOVERNANCE: A USER DEPARTMENT’S PERSPECTIVE USING DIFFUSION OF INNOVATIONS (DOI) THEORY
Author(s): Miguel R. Olivas-Luján and Gary W. Florkowski
Abstract: IT Governance, the responsibility for systematically making decisions that impact the acquisition, deployment, and overall usage of Information Technologies (IT) in a firm, has been touted as “the single most important predictor of the value an organization generates from IT” (Weill & Ross, 2004, pp. 3-4). In this paper, we present the results of a survey-based study of US and Canadian firms that utilize ICTs for HR purposes (e-HRM). To investigate whether the mode of HR-IT Governance matters in terms of the intensity of usage of HR Technologies, we used Diffusion of Innovations (DOI) theory in a moderated mediation functional form (James & Brett, 1984). We find support for the notion that the way an organization assigns responsibility for decision making for Human Resource ICTs indeed makes a difference in terms of the user (HR) and IS department factors that predict the intensity of HR Technology usage.

Title: THE IMPACTS OF HRIS IMPLEMENTATION AND DEPLOYMENT ON HR PROFESSIONALS’ COMPETENCES: AN OUTLINE FOR A RESEARCH PROGRAM
Author(s): Michel Delorme
Abstract: This paper examines the competencies required of HR professionals in the context of their new roles and responsibilities resulting from the implementation and deployment of HRIS.
From an integrative perspective that insists on the important roles of HR professionals as business partners and strategy architects, we analyze the new roles and responsibilities of HR professionals with specific regard to the implementation of HRIS. In this perspective, HR professionals act not only as users of the systems but also as architects of HRIS for the whole organization. Thus, new competencies in terms of knowledge, skills, abilities and other characteristics are required of HR professionals. We raise a number of essential questions for future HRIS research and suggest a methodological approach.

Title: ACTIVITY THEORY AS AN INTERPRETIVE FRAMEWORK FOR HR SYSTEMS: SOME INSIGHTS AND POTENTIAL CONTRIBUTIONS
Author(s): Mohamed Omar Mohamud
Abstract: Human Resource (HR) systems have received much attention from researchers and analysts in recent years. However, persistent themes resonate among existing HR studies, revolving around the disharmonies between wider organisational strategies and individual-oriented HR systems, as well as the quest for stability in an environment of prevalent ambiguities. The study uses activity theory as an interpretive and investigative framework to bridge the gaps in the way HR systems are analysed. A number of theoretical constructs that could potentially complement mainstream approaches are suggested and explained.

Title: FROM IS TO ORGANISATION: ANALYSING THE USES OF A COLLABORATIVE IS IN A HIGH-TECH SME
Author(s): Ewan Oiry, Amandine Pascal and Robert Tchobanian
Abstract: -

Title: ORGANIZATIONAL KNOWLEDGE AND CHANGE: THE ROLE OF TRANSFORMATIONAL HRIS
Author(s): Huub Ruel and Rodrigo Magalhaes
Abstract: In this paper we explore this loop by focusing on one type of IT application, Human Resources Information Systems (HRIS). HRIS, in turn, come in various types: operational, relational and transformational.
It is our contention that transformational HRIS are better understood in the context of an organizational knowledge management framework. We suggest the organizational knowledge life cycle as a conceptual tool to analyse the impact of transformational HRIS on the state of organizational knowledge in the organization, thereby allowing conclusions to be drawn not only about this type of HRIS but also about the process of organizational transformation itself.

Title: ENGINEERING THE ORGANIZATION FROM THE BOTTOM UP
Author(s): Marielba Zacarias, Rodrigo Magalhaes and José Tribolet
Abstract: Whereas current modelling efforts are mostly directed at organizational perspectives, little attention has been paid to individual or inter-personal perspectives. Several approaches to modelling organization strategy, processes and resources have been developed. However, models for the individual or inter-personal levels are scarce and typically have different purposes. Research is needed to address the modelling of individual and interpersonal behaviours and the definition of proper ways of linking these behaviours with the perspectives of higher organizational levels.

Title: E-HRM - USER PERCEPTIONS OF E-HRM TOOLS IN KUWAITI SMES
Author(s): Tahseen AbuZaineh and Hubertus Ruel
Abstract: e-HRM applications are playing an expanding role in modern organizations. Recently, large organizations have implemented e-HRM for different reasons. However, research about e-HRM has mainly been conducted in large enterprises; information regarding the perceptions of e-HRM users in small and medium-sized enterprises remains scarce. The existing literature presents different studies and surveys about e-HRM in SMEs and in large organizations, as well as national cultural characteristics and the differences and similarities among cultures. Gaining more information about the perceptions of e-HRM users in SMEs will help us to compare e-HRM in SMEs with e-HRM in large organizations in the future.
It will also help multinational business software providers to be aware of cultural differences, and of the extent to which national culture affects the perceptions of e-HRM users in certain societies.

Title: ORGANIZATIONAL CLIMATE FOR INNOVATION IMPLEMENTATION AND ICT APPROPRIATION: EXPLORING THE RELATIONSHIP THROUGH DISCOURSE ANALYSIS
Author(s): Tanya Bondarouk and Huub Ruël
Abstract: In this paper we explore the relationship between the organizational climate for innovation and ICT implementation success, defined as the stage at which end-users highly appropriate the newly implemented ICT. This exploration is guided by the question: how are the organizational climate for innovation implementation and end-user appropriation of ICT related? We carried out a longitudinal case study in a hospital where new ICT had been implemented. We analyzed the organizational climate for innovation and end-user appropriation by means of discourse analysis.

Title: USER PARTICIPATION AND INVOLVEMENT IN THE DEVELOPMENT OF HR SELF-SERVICE APPLICATIONS WITHIN THE DUTCH GOVERNMENT
Author(s): Gerwin Koopman and Ronald Batenburg
Abstract: This paper departs from the notion that user participation and involvement is one of the important factors for IS success. The five case studies portrayed are based on interviews with civil servants employed at different governmental organizations. In line with the findings from the literature, respondents argued that users should be involved early. A number of important lessons were learned by the respondents. The first concerns expectation management, i.e. keeping users informed about developments and the motives for certain decisions. Second, employees should be able to use the self-service applications without much support from the HR departments. A third important aspect is the distinct decision process that public organizations deal with.
Title: IMPROVING EMPLOYEE LIFE-CYCLE PROCESSES SUPPORT BY USING A WEB-ENABLED WORKFLOW SYSTEM: AN AGILE APPROACH
Author(s): Leon Welicki, Francisco Javier Piqueres Juan, Fernando Llorente Martin and Victor de Vega Hernandez
Abstract: Employee life-cycle management is a critical task that affects all companies, regardless of their size and business. These processes include hiring new employees, changing their conditions (compensation package, position, organizational unit, etc.) and dismissing them. In this paper we present our real-world experience of building a web-enabled workflow system for managing employee life-cycle process instances in a large Spanish telecommunications company.
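Workflow systems for employee life-cycle processes of the kind described above are commonly modeled as state machines, with each process instance advancing through approval states via named events. The states and transitions below are illustrative assumptions (the paper does not publish its workflow model); a minimal sketch in Python:

```python
# Minimal employee life-cycle workflow sketch: each process instance moves
# through states via named events, mirroring hire/change/dismiss flows.
# State and event names are illustrative, not the system from the paper.

TRANSITIONS = {
    ("draft", "submit"): "pending_approval",
    ("pending_approval", "approve"): "active",
    ("pending_approval", "reject"): "draft",
    ("active", "change_conditions"): "pending_approval",
    ("active", "dismiss"): "terminated",
}

class EmployeeProcess:
    """One workflow instance for a single employee."""

    def __init__(self, employee_id):
        self.employee_id = employee_id
        self.state = "draft"
        self.history = []  # audit trail of (state, event) pairs

    def fire(self, event):
        """Apply an event; raise if it is not legal in the current state."""
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"event {event!r} not allowed in state {self.state!r}")
        self.history.append(key)
        self.state = TRANSITIONS[key]
        return self.state

# Hire an employee, then change their conditions (e.g. a new position).
p = EmployeeProcess("E-001")
p.fire("submit")
p.fire("approve")            # hiring approved -> active
p.fire("change_conditions")  # back to pending_approval
p.fire("approve")            # condition change approved -> active
```

Keeping the transition table as data rather than code makes it easy to audit which events are legal in which state, which is exactly what an approval-driven HR workflow needs.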

Page Updated on 09-04-2008