ICEIS 2010 Abstracts


Area 1 - Databases and Information Systems Integration

Full Papers
Paper Nr: 16
Title:

MULTI-PROCESS OPTIMIZATION VIA HORIZONTAL MESSAGE QUEUE PARTITIONING

Authors:

Matthias Boehm, Dirk Habich and Wolfgang Lehner

Abstract: Message-oriented integration platforms execute integration processes—in the sense of workflow-based process specifications of integration tasks—in order to exchange data between heterogeneous systems and applications. The overall optimization objective is throughput maximization, i.e., maximizing the number of processed messages per time period. Here, moderate latency time of single messages is acceptable. The efficiency of the central integration platform is crucial for enterprise data management because both the data consistency between operational systems and the up-to-dateness of analytical query results depend on it. With the aim of integration process throughput maximization, we propose the concept of multi-process optimization (MPO). In this approach, messages are collected during a waiting period and executed in batches to optimize sequences of process instances of a single process plan. We introduce a horizontal—and thus, value-based—partitioning approach for message batch creation and show how to compute the optimal waiting time with regard to throughput maximization. This approach significantly reduces the total processing time of a message sequence and hence, it maximizes the throughput while accepting moderate latency time.
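
The paper's batching scheme is more elaborate, but the core idea of horizontal, value-based batch creation can be sketched as grouping queued messages by a partition attribute and executing one process instance per batch (an illustrative sketch only; the message structure and attribute names are hypothetical):

```python
from collections import defaultdict

def partition_messages(messages, key):
    """Group messages collected during the waiting period by a partition attribute."""
    batches = defaultdict(list)
    for msg in messages:
        batches[msg[key]].append(msg)
    return dict(batches)

# Messages collected while waiting
inbox = [
    {"customer": "A", "amount": 10},
    {"customer": "B", "amount": 20},
    {"customer": "A", "amount": 30},
]

batches = partition_messages(inbox, "customer")
# One process instance now handles each batch instead of one per message
assert sorted(batches) == ["A", "B"]
assert len(batches["A"]) == 2
```

The throughput gain comes from amortizing per-instance overhead over all messages that share a partition value; the optimal waiting time trades this gain against the added latency of collecting the batch.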

Paper Nr: 119
Title:

SOFIA: AGENT SCENARIO FOR FOREST INDUSTRY - Tailoring UBIWARE Platform Towards Industrial Agent-driven Solutions

Authors:

Sergiy Nikitin, Vagan Terziyan and Minna Lappalainen

Abstract: The current economic situation in the Finnish forest industry urgently calls for a higher degree of efficiency at all stages of the production chain. The competitiveness of timber-based products depends directly and heavily on the raw material cost. At the same time, the success of companies that use timber determines the volume of raw wood consumption and therefore drives forest markets. However, wood-consuming companies (e.g. paper producers) cannot unilaterally dictate logging and transportation prices to their contractors, because the profitability of the latter has already reached its reasonable margins (Vesterinen, 2005; Penttinen, 2009). Recent research conducted in 2005-2008 shows an extremely high degree of inefficiency in logistic operations among logging and transportation companies. Some of them have already realized the need for cooperative optimization, which calls for cross-company integration of existing information and control systems; however, privacy and trust issues prevent those companies from adopting open-environment solutions. Therefore, researchers have suggested new mediator-based business models that improve utilization while preserving the current state of affairs. New business solutions for logistic optimization can be built once a unified view of the market players is possible. Meanwhile, with the fast development of communications, RFID and sensor technologies, the forest industry sector is experiencing a technological leap. The adoption of innovative technologies opens possibilities for enacting new business scenarios driven by leading-edge ICT tools and technologies. We introduce an application scenario of the semantic agent platform UBIWARE for the forest industry sector of Finland.

Paper Nr: 130
Title:

WORKFLOW MANAGEMENT ISSUES IN VIRTUAL ENTERPRISE NETWORKS

Authors:

André Kolell and Jeewani Anupama Ginige

Abstract: Increasing competitive pressure and the availability of the Internet and related technologies have stimulated the collaboration of independent businesses. Such collaborations, aimed at achieving common business goals, are referred to as virtual enterprise networks (VENs). Though the web is an excellent platform for collaboration, the requirements of VENs regarding workflow management systems exceed those of autonomous organizations. This paper provides a comprehensive overview of numerous issues related to workflow management in VENs. These issues are discussed across the three phases of the virtual enterprise lifecycle: configuration, operation and dissolution; and are corroborated by two real case studies of VENs in Australia.

Paper Nr: 151
Title:

ACCESS RIGHTS IN ENTERPRISE FULL-TEXT SEARCH - Searching Large Intranets Effectively using Virtual Terms

Authors:

Jan Kasprzak, Michal Brandejs, Matěj Čuhel and Tomaš Obšivač

Abstract: One of the toughest problems to solve when deploying an enterprise-wide full-text search system is to handle the access rights of documents and intranet web pages correctly and effectively. Post-processing the results of a general-purpose full-text search engine (filtering out the documents inaccessible to the user who sent the query) can be an expensive operation, especially in large collections of documents. We discuss various approaches to this problem and propose a novel method which employs virtual tokens for encoding the access rights directly into the search index. We then evaluate this approach in an intranet system with several million documents and a complex set of access rights and access rules.
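
The virtual-token idea can be illustrated with a toy inverted index (a sketch under assumed naming, not the authors' implementation): each document's access-control list is indexed as artificial terms alongside its words, and a query is intersected with the tokens of the querying user's groups, so filtering happens inside the index rather than in post-processing:

```python
# Toy inverted index where each document's access-control list is
# indexed as "virtual" tokens (e.g. __acl_staff) alongside its words.
docs = {
    1: {"text": "budget report", "acl": ["staff"]},
    2: {"text": "public announcement", "acl": ["staff", "guest"]},
}

index = {}
for doc_id, doc in docs.items():
    tokens = doc["text"].split() + [f"__acl_{g}" for g in doc["acl"]]
    for t in tokens:
        index.setdefault(t, set()).add(doc_id)

def search(term, user_groups):
    """Hits for `term` restricted to docs carrying one of the user's ACL tokens."""
    allowed = set()
    for g in user_groups:
        allowed |= index.get(f"__acl_{g}", set())
    return index.get(term, set()) & allowed

assert search("report", ["guest"]) == set()   # guest cannot see doc 1
assert search("report", ["staff"]) == {1}
```

Because the restriction is an ordinary index intersection, it scales with the posting lists rather than with the full result set that post-filtering would have to scan.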

Paper Nr: 202
Title:

PROCESS-BASED DATA STREAMING IN SERVICE-ORIENTED ENVIRONMENTS - Application and Technique

Authors:

Steffen Preissler, Dirk Habich and Wolfgang Lehner

Abstract: Service-oriented environments increasingly become the central point for enterprise-related workflows. This also holds for data-intensive service applications, where such process types encounter performance and resource issues. To tackle these issues on a more conceptual level, we propose a stream-based process execution that is inspired by the typical execution semantics in data management environments. More specifically, we present a data and process model including a generalized concept for stream-based services. In our evaluation, we show that our approach outperforms the execution model of current service-oriented process environments.

Paper Nr: 217
Title:

A PLATFORM DEDICATED TO SHARE AND MUTUALIZE ENVIRONMENTAL APPLICATIONS

Authors:

Thérèse Libourel, Yuan Lin, Isabelle Mougenot, Christelle Pierkot and Jean Christophe Desconnets

Abstract: Scientists in the environmental domains (biology, geographical information, etc.) need to capitalize on, distribute and validate their scientific experiments of varying complexity. A multi-function platform is a suitable candidate for meeting this need. After a short introduction to our context and objective, this article presents the MDWeb project, a platform that we have designed and implemented for sharing and pooling geographic data. Building on this platform, our main interest is currently focused on providing users with a workflow environment, which will subsequently be integrated into the platform as a functional component. An introduction to a three-level workflow environment architecture (static, intermediate, dynamic) is presented. In this article, we focus mainly on the "static" level, which concerns the first phase of constructing a business process chain, and discuss the "intermediate" level, which covers both the instantiation of a business process chain and the validation, in terms of conformity, of the generated chain.

Paper Nr: 252
Title:

ONTOP - A Process to Support Ontology Conceptualization

Authors:

Elis Montoro Hernandes, Deysiane Sande and Sandra Fabbri

Abstract: Although there are tools that support ontology construction, such tools do not necessarily address the conceptualization phase and its need for execution resources. The objective of this paper is to present the ONTOP Process (ONTOlogy conceptualization Process) as an effective means of enhancing the conceptualization phase of ontology construction. This process is supported by the ONTOP-Tool, which provides an iterative way of defining a collaborative glossary and uses a visual metaphor to facilitate the identification of ontology components. Once the components are defined, it is possible to generate an OWL file that can be used as input to other ontology editors. The paper also presents an application of both the process and the tool, which emphasizes the contributions of this proposal.

Paper Nr: 262
Title:

PW-PLAN - A Strategy to Support Iteration-based Software Planning

Authors:

Deysiane Sande, Arnaldo Sanchez, Renan Montebelo, Sandra Fabbri and Elis Montoro Hernandes

Abstract: Background: Although there are many techniques in the literature that support software size estimation, iteration-based software development planning is still based on developers’ personal experience in most companies. Particularly for agile methods, iteration estimation must be as precise as possible, since the success of this kind of development is intrinsically related to it. Aim: In order to establish a systematic planning of iterations, this article presents the PW-Plan (Piece of Work Planning) strategy. This strategy is based on four items: iterative development, the use of a technique to estimate the complexity of the work to be done, the adoption of personal planning practices and the constant evaluation of the Effort Level (EL). Method: PW-Plan evolved from another strategy that was elaborated based on the systematic practice of using Use Case Points, the Personal Software Process and constant EL evaluation. Results: PW-Plan was used by two small companies in two case studies, which showed that its application is feasible from a practical point of view and that it enhances development control. Conclusion: The case studies provide insights into PW-Plan’s contribution to both the developer’s and the manager’s processes. Furthermore, applying the strategy provides more precise estimates for each iteration.

Paper Nr: 347
Title:

EVALUATING THE QUALITY OF FREE/OPEN SOURCE ERP SYSTEMS

Authors:

Lerina Aversano, Igino Pennino and Maria Tortorella

Abstract: The selection and adoption of open source ERP projects can significantly impact the competitiveness of organizations. Small and Medium Enterprises have to deal with major difficulties due to the limited resources available for performing the selection process. This paper proposes a framework for evaluating the quality of Open Source ERP systems. The framework is obtained through a specialization of a more general one, called EFFORT (Evaluation Framework for Free/Open souRce projects). The usefulness of the framework is investigated through a case study.

Paper Nr: 391
Title:

ENGINEERING PROCESS FOR CAPACITY-DRIVEN WEB SERVICES

Authors:

Imen Benzarti, Samir Tata, Zakaria Maamar, Nejib Ben Hadj-Alouane and Moez Yeddes

Abstract: This paper presents a novel approach for the engineering of capacity-driven Web services. By capacity, we mean how a Web service is empowered with several sets of operations from which it selectively triggers a set of operations with respect to some run-time environmental requirements. Because of the specificities of capacity-driven Web services compared to regular (i.e., mono-capacity) Web services, their engineering in terms of design, development, and deployment needs to be conducted in a specific way. Our approach defines an engineering process composed of five steps: (1) framing the requirements that could be put on these Web services, (2) defining capacities and how they are triggered and, last but not least, linking these capacities to requirements, (3) identifying the processes, in terms of business logic, that these Web services could implement, (4) generating the source code, and (5) generating the Capacity-driven Web Services Description Language (C-WSDL).

Paper Nr: 392
Title:

FLEXIBLE DATA ACCESS IN ERP SYSTEMS

Authors:

Vadym Borovskiy, Wolfgang Koch and Alexander Zeier

Abstract: Flexible data access is a necessary prerequisite to satisfy a number of acute needs of ERP system users and application developers. However, currently available ERP systems do not provide the ability to access and manipulate ERP data at any granularity level. This paper contributes the concept of query-like service invocation, implemented in the form of a business object query language (BOQL). Essentially, BOQL provides on-the-fly orchestration of the CRUD operations of business objects in an ERP system and achieves both the flexibility of SQL and the encapsulation of SOA. To demonstrate the power of the suggested concept, navigation, configuration and composite application development scenarios are presented in the paper. All suggestions have been prototyped on the Microsoft .NET platform.

Paper Nr: 393
Title:

A DISTRIBUTED ALGORITHM FOR FORMAL CONCEPTS PROCESSING BASED ON SEARCH SUBSPACES

Authors:

Nilander R. M. de Moraes, Luis E. Zárate and Henrique C. Freitas

Abstract: The processing of dense contexts is a common problem in Formal Concept Analysis. From input contexts, all possible combinations must be evaluated in order to obtain all correlations between objects and attributes. The state of the art shows that this problem can be solved through distributed processing: partial concepts are obtained from a distributed environment composed of machine clusters in order to achieve the final set of concepts. Therefore, the goal of this paper is to propose, develop, and evaluate a distributed algorithm with high performance to solve the problem of dense contexts. The speedup achieved through the distributed algorithm shows an improvement in performance but, mainly, a highly balanced workload, which reduces the processing time considerably. For this reason, the main contribution of this paper is the distributed algorithm, capable of accelerating the processing of dense formal contexts.

Short Papers
Paper Nr: 29
Title:

SW-ONTOLOGY - A Proposal for Semantic Modeling of a Scientific Workflow Management System

Authors:

Wander Gaspar, Laryssa Silva, Regina Braga and Fernanda Campos

Abstract: The execution of scientific experiments based on computer simulations constitutes an important contribution to the scientific community. In this sense, the implementation of a scientific workflow can be automated by Scientific Workflow Management Systems, whose goal is to provide the orchestration of all processes involved. Our work aims to capture the semantics related to the implementation of scientific workflows using ontologies that can capture the knowledge involved in these processes. Specifically, we present a prototype of an ontology based on a design pattern called Model View for the representation of knowledge in scientific workflow management systems.

Paper Nr: 59
Title:

SOLUTIONS FOR SPEEDING-UP ON-LINE DYNAMIC SIGNATURE AUTHENTICATION

Authors:

Valentin Andrei, Sorin Mircea Rusu and Ştefan Diaconescu

Abstract: The article presents a study, and its experimental results, of methods for speeding up authentication of dynamic handwritten signatures in an on-line authentication system. We describe three solutions that use parallel computing: a 16-processor server, an FPGA development board, and a graphics card based on nVidia CUDA technology. For each solution, we detail how it can be integrated into an authentication provider system, and we specify its advantages and disadvantages.

Paper Nr: 89
Title:

INTEGRATION OF REPOSITORIES IN ELEARNING SYSTEMS

Authors:

José Paulo Leal and Ricardo Queirós

Abstract: The wide acceptance of digital repositories today in the eLearning field raises several interoperability issues. In this paper we present the interoperability features of a service-oriented repository of learning objects called crimsonHex. These features are compliant with the existing standards, and we propose extensions to the IMS interoperability recommendation, adding new functions, formalizing message interchange and also providing a REST interface. To validate the proposed extensions and their implementation in crimsonHex, we developed a repository plugin for Moodle 2.0 that is expected to be included in the next release of this popular learning management system.

Paper Nr: 90
Title:

COLLABORATIVE KNOWLEDGE EVALUATION WITH A SEMANTIC WIKI - WikiDesign

Authors:

Davy Monticolo, Samuel Gomes, Vincent Hilaire and Abder Koukam

Abstract: In this paper we present how to ensure knowledge evaluation and evolution in a knowledge management system by using a Semantic Wiki approach. We describe a Semantic Wiki called WikiDesign, which is a component of a Knowledge Management system. Currently, WikiDesign is used in the engineering departments of companies to emphasize technical knowledge. In this paper, we explain how WikiDesign ensures the reliability of the knowledge base thanks to a knowledge evaluation process. After explaining the interest of using semantic wikis in a knowledge management approach, we describe the architecture of WikiDesign with its semantic functionalities. At the end of the paper, we demonstrate the effectiveness of WikiDesign with a knowledge evaluation example from an industrial project.

Paper Nr: 95
Title:

CLUX - Clustering XML Sub-trees

Authors:

Stefan Böttcher, Rita Hartel and Christoph Krislin

Abstract: XML has become the de facto standard for data exchange in enterprise information systems. But whenever XML data is stored or processed, e.g. in form of a DOM tree representation, the XML markup causes a huge blow-up of the memory consumption compared to the data, i.e., text and attribute values, contained in the XML document. In this paper, we present CluX, an XML compression approach based on clustering XML sub-trees. CluX uses a grammar for sharing similar substructures within the XML tree structure and a cluster-based heuristic for greedily selecting the best compression options in the grammar. Thereby, CluX allows for storing and exchanging XML data in a space-efficient and still queryable way. We evaluate different strategies for XML structure sharing, and we show that CluX often compresses better than XMill, Gzip, and Bzip2, which makes CluX a promising technique for XML data exchange whenever the exchanged data volume is a bottleneck in enterprise information systems.
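
As a rough, hypothetical analogue of grammar-based structure sharing (not CluX itself), identical subtrees can be deduplicated by hash-consing, so each distinct substructure is stored once and referenced by a rule id:

```python
# Hash-consing sketch: identical XML subtrees are stored once and
# referenced by a grammar rule id (a rough analogue of structure sharing).
def share(tree, rules, seen):
    """tree = (tag, [children]); returns a rule id, deduplicating subtrees."""
    tag, children = tree
    child_ids = tuple(share(c, rules, seen) for c in children)
    key = (tag, child_ids)
    if key not in seen:
        seen[key] = len(rules)
        rules.append(key)
    return seen[key]

# <a><b/><b/></a> : both <b/> subtrees collapse to one rule
rules, seen = [], {}
root = share(("a", [("b", []), ("b", [])]), rules, seen)
assert len(rules) == 2   # one rule for <b/>, one for <a>
```

CluX goes further by also sharing merely similar (not identical) substructures and by choosing greedily among compression options, which is where the clustering heuristic comes in.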

Paper Nr: 103
Title:

IMPROVING REAL WORLD SCHEMA MATCHING WITH DECOMPOSITION PROCESS

Authors:

Sana Sellami, Aïcha-Nabila Benharkat, Youssef Amghar and Frédéric Flouvat

Abstract: This paper addresses a difficult problem: matching large XML schemas. Matching at scale incurs long execution times and decreases the quality of matches. In this paper, we propose an XML schema decomposition approach as a solution to the large schema matching problem. The presented approach identifies the common structures between and within XML schemas and decomposes the input schemas accordingly. Our method uses tree mining techniques to identify these common structures and to select the most relevant sub-parts of large schemas for matching. As shown by our experiments in the e-business domain, the proposed approach improves the performance of schema matching and offers better quality matches in comparison to other existing matching tools.

Paper Nr: 107
Title:

CHIEF INFORMATION OFFICER PROFILE IN HIGHER EDUCATION

Authors:

Adam Marks and Yacine Rezgui

Abstract: The purpose of this paper is to provide an unbiased picture of what many universities seek and expect in terms of requirements, attributes, competencies, and expected functions and duties in IT executive candidates, namely the Chief Information Officer (CIO). The paper provides a benchmark that can be used by universities to compare or build their IT leadership profiles. The study examines the education, experience, and other skills requirements in 374 active and archived electronic web advertisements for the position of CIO. It also looks at the expected job requirements for that role. The findings suggest that universities’ criteria and requirements for CIOs may vary based on factors such as the organization size and needs, the complexity of the organizational structure, and available budget, but are overall comparable with those of other industries.

Paper Nr: 115
Title:

CONSTRAINT CHECKING FOR NON-BLOCKING TRANSACTION PROCESSING IN MOBILE AD-HOC NETWORKS

Authors:

Sebastian Obermeier and Stefan Böttcher

Abstract: Whenever business transactions involve databases located on different mobile devices in a mobile ad-hoc network, transaction processing should guarantee the following: atomic commitment and isolation of distributed transactions, and data consistency across different mobile devices. However, a major problem of distributed atomic commit protocols in mobile network scenarios is infinite transaction blocking, which occurs when a local sub-transaction that has voted for commit cannot be completed due to the loss of commit messages and network partitioning. For such scenarios, Bi-State-Termination has recently been suggested to terminate pending and blocked transactions, which makes it possible to overcome the infinite blocking problem. However, if the data distributed across different mobile devices has to be consistent according to local or global database consistency constraints, Bi-State-Termination has not been able to check the validity of these constraints on a database state involving the data of different mobile devices. Within this paper, we extend the concept of Bi-State-Termination to arbitrary read operations. We show how to handle several types of database consistency constraints, and experimentally evaluate our constraint checker using the TPC-C benchmark.

Paper Nr: 120
Title:

ABSORPTION OF INFORMATION PROVIDED BY BUSINESS INTELLIGENCE SYSTEMS - The Effect of Information Quality on the Use of Information in Business Processes

Authors:

Aleš Popovič, Pedro Simões Coelho and Jurij Jaklič

Abstract: The fields of business intelligence and business intelligence systems have been gaining significance in the scientific area of decision support and decision support systems. In order to better understand the mechanisms by which business intelligence systems provide benefits, this research establishes and empirically tests a model of the impact of business intelligence systems’ maturity on the use of information in organizational operational and managerial business processes, where this effect is mediated by information quality. The proposed structural model was analyzed based on an empirical investigation of Slovenian medium- and large-sized organizations. The findings suggest that business intelligence system maturity positively impacts both segments of information quality, yet its impact on information media quality is greater than its impact on content quality. Moreover, the impact of information content quality on the use of information is much larger than the impact of information media quality. Consequently, when introducing business intelligence systems, organizations clearly need to focus more on information content quality issues than they currently do.

Paper Nr: 121
Title:

AN OPEN SOURCE SOFTWARE BASED LIBRARY CATALOGUE SYSTEM USING SOUNDEXING RETRIEVAL AND QUERY CACHING

Authors:

Zhenghui Pan, Yan Zhang and Jiansheng Huang

Abstract: It has been a challenge to apply effective knowledge management tools in information systems for modern libraries that deliver the most up-to-date, relevant information to end users in a quick, efficient, and user-friendly manner. In this paper, the authors present the design of a library catalogue system built principally on open source software. The integrated web-based library system adopts a client/server architecture with multiple tiers. For performance enhancement with respect to error tolerance, searching speed and scalability, techniques of soundexing retrieval and query caching were applied. With the support of an appropriately designed soundex algorithm, the catalogue system can largely increase its recall without compromising search precision. In addition, the introduced query cache speeds up the system response significantly.
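
Soundex itself is a standard phonetic algorithm; a simplified variant (omitting the classic H/W adjacency rule) shows why differently spelled but similar-sounding author names map to the same search key, which is what raises recall for misspelled queries:

```python
def soundex(word):
    """Simplified Soundex: first letter plus up to three digit codes."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    result, prev = word[0].upper(), codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:   # skip vowels and collapse repeats
            result += code
        prev = code
    return (result + "000")[:4]

# Misspellings of the same name map to the same key, raising recall
assert soundex("Robert") == soundex("Rupert") == "R163"
assert soundex("Smith") == soundex("Smyth")
```

Indexing the soundex key alongside the exact title or author field lets the catalogue fall back to phonetic matching only when an exact search misses, which is how recall can rise without hurting precision on exact hits.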

Paper Nr: 129
Title:

USE DATA MINING TO IMPROVE STUDENT RETENTION IN HIGHER EDUCATION - A Case Study

Authors:

Ying Zhang, Samia Oussena, Tony Clark and Hyeonsook Kim

Abstract: Data mining combines machine learning, statistics and visualization techniques to discover and extract knowledge. One of the biggest challenges that higher education faces is to improve student retention (National Audit Office, 2007). Student retention has become an indicator of academic performance and enrolment management. Our project uses data mining and natural language processing technologies to monitor students, analyze student academic behaviour and provide a basis for efficient intervention strategies. Our aim is to identify potential problems as early as possible and to follow up with intervention options to enhance student retention. In this paper we discuss how data mining can help spot students ‘at risk’, evaluate course or module suitability, and tailor interventions to increase student retention.

Paper Nr: 147
Title:

MIGRATING LEGACY SYSTEMS TO A SERVICE-ORIENTED ARCHITECTURE WITH OPTIMAL GRANULARITY

Authors:

Saad Alahmari, Ed Zaluska and David De Roure

Abstract: The enhanced interoperability of business systems based on Service-Oriented Architecture (SOA) has created an increased demand for the re-engineering and migration of legacy software systems to SOA-based systems. Existing approaches focus mainly on defining coarse-grained services corresponding to business requirements, and neglect the importance of optimising service granularity based on service reusability, governance, maintainability and cohesion. An improved migration of legacy systems onto SOA-based systems requires identifying the ‘right’ services with an appropriate level of granularity. This paper proposes a novel framework for the effective identification of the key services in legacy code to provide such an optimal mapping. The framework focuses on identifying these services (based on standardized modelling languages UML and BPMN) and provides effective guidelines for identifying optimal service granularity over a wide range of possible service types.

Paper Nr: 154
Title:

WHAT-IF ANALYSIS IN OLAP - With a Case Study in Supermarket Sales Data

Authors:

Emiel Caron and Hennie Daniels

Abstract: Today’s OnLine Analytical Processing (OLAP) or multi-dimensional databases have limited support for what-if or sensitivity analysis. What-if analysis is the analysis of how the variation in the output of a mathematical model can be assigned to different sources of variation in the model’s input. This functionality would give the OLAP analyst the possibility to play with “What if ...?” questions in an OLAP cube. For example, with questions of the form: “What happens to an aggregated value in the dimension hierarchy if I change the value of this data cell by so much?” These types of questions are, for example, important for managers who want to analyse the effect of changes in sales on a product’s profitability in an OLAP supermarket sales cube. In this paper, we extend the functionality of the OLAP database with what-if analysis.
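
A minimal sketch of the idea, with a hypothetical one-path hierarchy (real cubes have multiple dimensions and more subtle propagation rules): changing a leaf cell propagates the delta to every aggregate that contains it:

```python
# Minimal what-if sketch: changing a leaf cell propagates the delta
# up the dimension hierarchy to every aggregate that contains it.
hierarchy = {"colaA": "cola", "cola": "drinks", "drinks": "all"}  # child -> parent
cells = {"colaA": 10, "cola": 25, "drinks": 60, "all": 100}

def what_if(cell, new_value):
    delta = new_value - cells[cell]
    node = cell
    while node is not None:
        cells[node] += delta          # every ancestor absorbs the delta
        node = hierarchy.get(node)

what_if("colaA", 15)                  # +5 at the leaf
assert cells["cola"] == 30
assert cells["all"] == 105
```

The interesting design questions the paper faces lie beyond this sketch: propagating changes downward to children, and handling derived measures such as profitability that are not simple sums.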

Paper Nr: 169
Title:

GOING VIRTUAL - Popular Trend or Real Prospect for Enterprise Information Systems

Authors:

Mariana Carroll, Paula Kotzé and Alta van der Merwe

Abstract: Organisations are faced with a number of challenges and issues in decentralised, multiple-server, physical, non-virtualized IT environments. Virtualization in recent years has had a significant impact on computing environments and has introduced benefits, including server consolidation, server and hardware utilization and reduced costs. Virtualization’s popularity has led to its growth in many IT environments. This paper provides an overview of the IT challenges in non-virtualized environments and addresses the question of whether virtualization provides the solution to these IT challenges.

Paper Nr: 173
Title:

ORGANIZATIONAL KNOWLEDGE MANAGEMENT THROUGH SOFTWARE PROCESS REUSE AND CASE-BASED REASONING

Authors:

Viviane A. Santos and Mariela I. Cortés

Abstract: Software process reuse involves different aspects of the knowledge obtained from generic process models and previous successful projects. The benefit of reuse is reached by the definition of an effective and systematic process to specify, produce, classify, retrieve and adapt software artifacts for utilization in another context. In this work we present a formal approach for software process reuse to assist the definition, adaptation and improvement of the organization’s standard process. A tool based on the Case-Based Reasoning technology is used to manage the collective knowledge of the organization.

Paper Nr: 176
Title:

ATTACK SCENARIOS FOR POSSIBLE MISUSE OF PERIPHERAL PARTS IN THE GERMAN HEALTH INFORMATION INFRASTRUCTURE

Authors:

Ali Sunyaev, Alexander Kaletsch, Sebastian Dünnebeil and Helmut Krcmar

Abstract: This paper focuses on functional issues within the peripheral parts of the German health information infrastructure which compromise security and patients’ information safety or might violate the law. Our findings demonstrate that misuse of existing functionality is possible. With examples and detailed use cases, we show that the health infrastructure can be used for more than just ordinary electronic health care services. In order to validate this evidence beyond the laboratory, we tested all attack scenarios in a typical German physician’s practice. Furthermore, security measures to overcome the identified threats are provided, and open questions regarding these issues are discussed.

Paper Nr: 185
Title:

ESTIMATING SOFTWARE DEVELOPMENT EFFORT USING TABU SEARCH

Authors:

Filomena Ferrucci, Carmine Gravino, Rocco Oliveto and Federica Sarro

Abstract: Several studies have recently been carried out to investigate the use of search-based techniques in estimating software development effort, and the reported results seem promising. Tabu Search is a meta-heuristic approach successfully used to address several optimization problems. In this paper, we report on an empirical analysis carried out by exploiting Tabu Search on two publicly available datasets, i.e., Desharnais and NASA. On these datasets, the exploited Tabu Search settings provided estimates comparable with those achieved by some widely used estimation techniques, thus suggesting further investigation of this topic.
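
As an illustration of the meta-heuristic only (not the paper's configuration, datasets, or model form), Tabu Search can fit even a one-parameter effort model by repeatedly moving to the best non-tabu neighbor while a short-term memory forbids revisiting recent solutions:

```python
# Sketch of tabu search fitting the slope of a one-parameter effort
# model (effort ~ a * size) to toy data by minimizing mean absolute error.
data = [(10, 52), (20, 98), (30, 151)]           # (size, actual effort)

def mae(a):
    return sum(abs(e - a * s) for s, e in data) / len(data)

current = best = 1.0
tabu = []                                        # short-term memory
for _ in range(100):
    moves = [round(current + d, 1) for d in (-0.1, 0.1, -0.5, 0.5)
             if round(current + d, 1) not in tabu]
    if not moves:
        break
    current = min(moves, key=mae)                # best non-tabu neighbor
    tabu = (tabu + [current])[-10:]              # forget old moves
    if mae(current) < mae(best):
        best = current

assert abs(best - 5.0) < 0.01   # recovers the underlying slope of ~5
```

The tabu list is what lets the search accept temporarily worse moves and escape local optima; real settings (neighborhood, tenure, model with many coefficients) are tuning choices the paper investigates empirically.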

Paper Nr: 192
Title:

FOSTERING IT-ENABLED BUSINESS INNOVATIONS - An Approach for CIOs to Innovate the Business

Authors:

Michael Lang and Michael Amberg

Abstract: Nowadays companies are in worldwide competition for innovations, which are essential to ensure their competitiveness and consequently their business success. Information Technology (IT) plays an important role in this context. Thus, IT organizations are expected to make their own value contribution to corporate performance by delivering innovations. This raises the question of how chief information officers (CIOs) can facilitate the generation of IT-enabled business innovations. As a result of our research, we identified 22 factors covering the relevant aspects of IT organizations: IT projects, IT systems, IT processes, IT services, IT personnel and aspects referring to the organizational structure. These factors help CIOs to foster the generation of IT-enabled innovations and should therefore be considered in the management of IT organizations.

Paper Nr: 199
Title:

AdaptIE - Using Domain Language Concept to Enable Domain Experts in Modeling of Information Extraction Plans

Authors:

Wojciech M. Barczyñski, Felix Förster, Falk Brauer and Daniel Schuster

Abstract: Implementing domain-specific Information Extraction (IE) technologies to retrieve structured information from unstructured data is a challenging and complex task. It requires both IE expertise (e.g., in linguistics) and domain knowledge, provided by a domain expert who is aware of, say, the text corpus specifics and the entities of interest. While the IE expert role is addressed by several approaches, less has been done to involve domain experts in the process of IE development. Our approach targets this issue. We provide a base platform for collaboration between experts through IE plan modeling languages used to compose basic IE operators into complex IE flows. We provide each of the experts with a language that is adapted to their respective expertise: IE experts use a fine-grained view and domain experts a coarse-grained view on the execution of IE. We use the Model Driven Architecture concept to enable transitions among the languages and the operators provided by an algebraic IE framework. To prove the applicability of our approach we implemented an Eclipse-based tool, AdaptIE, and demonstrate it in a real-world scenario for the SAP Community Network.

Paper Nr: 229
Title:

EFFICIENT SEMI-AUTOMATIC MAINTENANCE OF MAPPING BETWEEN ONTOLOGIES IN A BIOMEDICAL ENVIRONMENT

Authors:

Imen Ketata, Riad Mokadem, Franck Morvan and Abdelkader Hameurlain

Abstract: In dynamic environments like data management in the biomedical domain, adding a new element (e.g., a concept) to an ontology O1 requires creating a significant number of mappings between O1 and the ontologies linked to it. To avoid creating such mappings for each element addition, old mappings can be reused. Hence, the element w nearest to the added one should be retrieved in order to reuse its mapping schema. In this paper, we deal with the existing additive axiom, which can be used to retrieve this w. However, in such an axiom, the use of parameters like the number of element occurrences appears insufficient. We introduce a similarity calculation and a user’s opinion score in order to achieve more precision and semantics in the retrieval of w. An illustrative example is presented to assess our contribution.

Paper Nr: 235
Title:

AN INCREMENTAL PROCESS MINING ALGORITHM

Authors:

André Kalsing, Lucinéia Heloisa Thom and Cirano Iochpe

Abstract: A number of process mining algorithms have already been proposed to extract knowledge from application execution logs. This knowledge includes the business process itself as well as business rules and organizational structure aspects, such as actors and roles. However, existing algorithms for extracting business processes neither scale very well to larger datasets nor support incremental mining of logs. Process mining can benefit from an incremental mining strategy especially when the information system’s source code is logically complex, requiring a large dataset of logs in order for the mining algorithm to discover and present the complete business process behavior. Incremental process mining can also pay off when it is necessary to extract the complete business process model gradually, by extracting partial models in a first step and integrating them into a complete model in a final step. This paper presents an incremental algorithm for mining business processes. The new algorithm enables the update as well as the enlargement and improvement of a partial process model as new log records are added to the log file. In this way, processing time can be significantly reduced, since only new event traces are processed rather than the complete log data.
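The core idea of processing only new event traces can be illustrated with a much-simplified miner that maintains a directly-follows graph (a common intermediate model in process mining) and updates it incrementally. This is a sketch under stated assumptions, not the paper's algorithm, which additionally handles model enlargement and improvement; all names here are illustrative.

```python
from collections import Counter

class IncrementalMiner:
    """Maintain a directly-follows graph over an event log and update it
    from new traces only, instead of re-mining the complete log."""

    def __init__(self):
        self.follows = Counter()  # (activity_a, activity_b) -> count
        self.seen = 0             # number of traces processed so far

    def update(self, new_traces):
        # Process only the new portion of the log.
        for trace in new_traces:
            for a, b in zip(trace, trace[1:]):
                self.follows[(a, b)] += 1
            self.seen += 1

    def edges(self, min_count=1):
        # Current partial process model as a set of frequent edges.
        return {e for e, c in self.follows.items() if c >= min_count}
```

Each call to `update` costs time proportional to the new traces alone, which is the source of the processing-time reduction the abstract claims.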

Paper Nr: 264
Title:

USING SQL/XML FOR EFFICIENTLY TRANSLATING QUERIES OVER XML VIEW OF RELATIONAL DATA

Authors:

Fernando Lemos, Clayton Costa and Vânia Vidal

Abstract: XML and Web services are becoming the standard means of publishing and integrating data over the Web. A convenient way to provide universal access (over the Web) to different data sources is to implement Web services that encapsulate XML views of the underlying relational data (Data Access Web Services). Given that most business data is currently stored in relational database systems, the generation of Web services for publishing XML views of relational data has special significance. In this work, we propose RelP, a framework for publishing and querying relational databases through XML views. The main contribution of the paper is an algorithm that translates XML queries over a published XML view schema into a single SQL/XML query over the data source schema.

Paper Nr: 266
Title:

A FLEXIBLE FRAMEWORK FOR APPLYING DATA ACCESS AUTHORIZATION BUSINESS RULES

Authors:

Leonardo Guerreiro Azevedo, Sergio Puntar, Raphael Thiago, Fernanda Baião and Claudia Cappelli

Abstract: This work proposes a flexible framework for managing and implementing data access authorization business rules on top of relational DBMSs, independently of the applications accessing the database. The framework adopts the RBAC policy definition approach and was implemented on the Oracle DBMS. Data access security is therefore managed by the data server layer in a centralized manner, rather than in each application that accesses the data, and is enforced by the database server. Experimental tests were executed using the TPC-H benchmark workload, and the results indicate the effectiveness of our proposal.

Paper Nr: 279
Title:

A GENETIC PROGRAMMING APPROACH TO SOFTWARE COST MODELING AND ESTIMATION

Authors:

Efi Papatheocharous, Angela Iasonos and Andreas S. Andreou

Abstract: This paper investigates the utilization of Genetic Programming (GP) as a method to facilitate better software cost modeling and estimation. The aim is to produce and examine candidate solutions in the form of representations that utilize operators and operands, which are then used in algorithmic cost estimation. These solutions essentially constitute regression equations over software cost factors, used to effectively estimate the dependent variable, that is, the effort spent on developing software projects. The GP application generates representative rules through which the usefulness of various project characteristics as explanatory variables, and ultimately as predictors of development effort, is investigated. The experiments conducted are based on two publicly available empirical datasets typically used in software cost estimation and indicate that the proposed approach provides consistent and successful results.

Paper Nr: 314
Title:

BIDM - The Business Intelligence Development Model

Authors:

Catalina Sacu and Marco Spruit

Abstract: Business Intelligence (BI) has been a very dynamic and popular field of research in the last few years, as it helps organizations make better decisions and increase their profitability. This paper aims to bring some structure to the BI field of research by proposing a BI development model that relates the current BI development stages and their main characteristics. This framework can be used by organizations to identify their current BI stage and provides insight into how to improve their BI function.

Paper Nr: 319
Title:

A CYBER-PHYSICAL SYSTEM FOR ELDERS MONITORING

Authors:

Xiang Li, Ying Qiao and Hongan Wang

Abstract: In a new life style, elders can stay in multiple places instead of a single one. This new life style has two features, namely multiple scenarios and changes in the elders’ status, which pose challenges to traditional elder monitoring systems in terms of cooperation and flexibility. This paper presents a new cyber-physical system for monitoring elders, which is divided into two layers: a sub-system layer and a global service layer. Active databases with a real-time event-condition-action (ECA) rule reasoning system are used as core components in the sub-systems and the global service to detect risks and cooperate with other systems actively, intelligently, and in real time. The structure of the ECA rule reasoning system is flexible, in that ECA rules can be adjusted on-the-fly. We discuss several properties of the cyber-physical system, including cooperation among sub-systems, flexibility, and its real-time property. Finally, we present a case study to validate our work.
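The event-condition-action pattern at the core of the described sub-systems can be sketched in a few lines. The engine below is a minimal illustration, assuming hypothetical rule names, events and payload fields (nothing here is taken from the paper); it shows how rules can be added and removed on-the-fly, which is the flexibility the abstract emphasises.

```python
class ECAEngine:
    """Tiny event-condition-action rule engine: ON event IF condition
    THEN action. Rules can be adjusted at run time."""

    def __init__(self):
        self.rules = {}  # rule name -> (event_type, condition, action)

    def add_rule(self, name, event_type, condition, action):
        self.rules[name] = (event_type, condition, action)

    def remove_rule(self, name):
        self.rules.pop(name, None)

    def dispatch(self, event_type, payload):
        # Fire every rule whose event matches and whose condition holds.
        fired = []
        for name, (etype, cond, act) in self.rules.items():
            if etype == event_type and cond(payload):
                act(payload)
                fired.append(name)
        return fired

alerts = []
engine = ECAEngine()
# Hypothetical rule: ON heart-rate event IF rate abnormal THEN alert.
engine.add_rule("hr_alert", "heart_rate",
                lambda p: p["bpm"] > 120 or p["bpm"] < 40,
                lambda p: alerts.append((p["elder"], p["bpm"])))
```

Because rules live in a plain dictionary, a sub-system can swap monitoring logic as an elder's status or location changes, without restarting the engine.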

Paper Nr: 361
Title:

FINDING AND CLASSIFYING PRODUCT RELATIONSHIPS USING INFORMATION FROM THE PUBLIC WEB

Authors:

Daniel Schuster, Till M. Juchheim and Alexander Schill

Abstract: Relationships between products, such as accessory or successor products, are hard to find on the Web or have to be inserted manually into product information systems. Finding and classifying such relations automatically, using information from the public Web only, offers great value to customers and vendors, as it helps to improve the buying process at low cost. We present and evaluate algorithms and methods for product relationship extraction on the Web requiring only a set of clustered product names as input. The solution can easily be implemented in different product information systems; it is most useful in, but not necessarily restricted to, the application domain of online shopping.

Paper Nr: 380
Title:

TOWARDS LOCATION-BASED SERVICES STANDARDIZATION - An Application based on Mobility and Geo-location

Authors:

El Kindi Rezig and Valérie Monfort

Abstract: Location-based services (LBSs) have improved users’ lives drastically through useful applications such as location-based advertising, vehicle tracking, and mobile commerce. However, most LBS applications are provider-specific and generally cannot coexist with LBS applications from other providers; consequently, consumers have to use different platforms and applications in order to discover and interact with different LBS applications. In this paper, we present an infrastructure based on geo-location and Web services to promote uniformity in the way LBS providers publish their services on the one hand, and in the way consumers discover and interact with the LBSs they look for on the other. Moreover, the proposed infrastructure is context-aware and promotes dynamic service accessibility by offering consumers, as they move, the nearest services they are interested in.

Paper Nr: 408
Title:

DEFINING A UNIFIED META MODELING ARCHITECTURE FOR DEPLOYMENT OF DISTRIBUTED COMPONENTS-BASED SOFTWARE APPLICATIONS

Authors:

Mariam Dibo and Noureddine Belkhatir

Abstract: Deployment is a complex process gathering the activities needed to make applications operational after development. Today, the component approach and distribution make deployment an even more complex process. Many deployment tools exist, but they are often built in an ad hoc way, i.e., specific to a technology or an architecture, and only partially cover the deployment life cycle. Hence there is an increased need for new techniques and tools to manage these systems. In this work, we focus on the deployment process and describe a framework called UDeploy. UDeploy (Generic Deployment framework) is based on a generic engine which permits, firstly, carrying out the planning process from meta-information related to the application and the infrastructure; secondly, generating specific deployment descriptors related to the application and the environment (i.e., the machines connected to a network where a software system is deployed); and finally, executing a plan produced by means of deployment strategies. The work presented in this paper focuses on a generic deployment architecture driven by meta-models and their transformations. In this respect, UDeploy is independent from any specific technology and also from any specific platform characteristic.

Paper Nr: 418
Title:

p-MDAG - A Parallel MDAG Approach

Authors:

Joubert de Castro Lima and Celso Massaki Hirata

Abstract: In this paper, we present a novel parallel full cube computation approach, named p-MDAG. The p-MDAG approach is a parallel version of the sequential MDAG approach. The sequential MDAG approach outperforms the classic Star approach in dense, skewed and sparse scenarios; in general, it is 25-35% faster than Star while consuming, on average, 50% less memory to represent the same data cube. The p-MDAG approach improves the runtime while keeping the low memory consumption; it uses an attribute-based data cube decomposition strategy which combines both task and data parallelism. The p-MDAG approach uses the dimensions' attribute values to partition the data cube, and it redesigns the sequential MDAG algorithms to run in parallel. The p-MDAG approach provides both good load balance and memory consumption similar to the sequential version. Its logical design can be implemented in shared-memory, distributed-memory and hybrid architectures with minimal adaptation.

Paper Nr: 429
Title:

A PRACTICAL APPLICATION OF SOA - A Collaborative Marketplace

Authors:

Sophie Rousseau, Olivier Camp and Slimane Hammoudi

Abstract: The economic context greatly impacts companies and their Information Systems (ISs). Companies face new competitors, develop new business skills, and delocalize whole or part of their organization. Moreover, they are faced with powerful competitors, and new products fitting customer needs must be developed fast, sometimes in less than 3 months. These companies’ ISs need to cope with such complex evolutions and have to accommodate the resulting changes. Service-Oriented Architectures (SOA) are widely used by companies to gain flexibility, and Web services, by providing interoperability and loose coupling, are the fitting technical solution to support SOA. Basic Web services are assembled into composite Web services in order to directly support business processes. The aim of this work is to present a practical and successful application of SOA to the design of a collaborative marketplace. An implementation in the Oracle environment is achieved using the BPEL Process Manager, allowing an integration of the different components involved in the design of a collaborative marketplace. In particular, this paper illustrates, through a practical example, how the SOA approach promotes interoperability, one of its main strengths, by integrating an open source ERP into a mainly Oracle-based software architecture.

Posters
Paper Nr: 1
Title:

INTERFACE USABILITY OF A VISUAL QUERY LANGUAGE FOR MOBILE GIS

Authors:

Haifa Elsidani Elariss and Souheil Khaddaj

Abstract: In recent years, many mobile applications for non-experts have been deployed to query Geographic Information Systems (GIS), in particular for proximity analysis, where users ask questions related to their current position using a mobile phone. To this end, the new Iconic Visual Query Language (IVQL) has been developed and evaluated using a tourist application based on the map of Paris. The evaluation was carried out to test various usability aspects such as the expressive power of the language, query formulation, and the user interface (GUI). The evaluation of the user interface presented here was conducted through the user satisfaction of two subject groups, programmers and non-programmers. The results show that subjects found the IVQL GUI to be excellent software with a good organization of icons, and satisfying to use, with no significant difference between the two groups. The subjects also reported that they found learning to operate the system easy, exploring new features easy, remembering the use of the icons easy, and performing tasks straightforward.

Paper Nr: 18
Title:

AGILITY BY VIRTUE OF ARCHITECTURE MATURITY

Authors:

Gouri Prakash

Abstract: This position paper demonstrates how architecture maturity, combined with the degree of agility observed in software projects, results in four different project types, namely Experimental, Conservative, Ceremonious and Optimizing, and describes the characteristics of each of these project types. The paper puts forth the position that project managers should endeavour to elicit the characteristics of software projects and match them with the characteristics of the four project types discussed in this paper before embarking on the project. By pursuing this approach, managers would be able to ascertain the benefits of pursuing a particular project type for a given project, the difference between how things are and how things should be, and what factors can get them to the should-be position.

Paper Nr: 38
Title:

PERFORMANCE OVERHEAD OF PARAVIRTUALIZATION ON AN EXEMPLARY ERP SYSTEM

Authors:

André Bögelsack, Helmut Krcmar and Holger Wittges

Abstract: This paper addresses aspects of performance overhead when using paravirtualization techniques. To quantify the overhead, the paper introduces and utilizes a new testing method, called the Zachmann test, to determine the performance overhead in a paravirtualized environment. The Zachmann test is used to perform CPU- and memory-intensive operations in a testing environment consisting of an exemplary Enterprise Resource Planning (ERP) system and a Xen hypervisor derivative. We focus on two issues: first, the performance overhead in general and, second, the analysis of “overcommitment” situations. Our measurements show that the ERP system suffers a performance loss of up to 44% in virtualized environments compared to non-virtualized environments. Extreme overcommitment situations can lead to an overall performance loss of up to 10%. This work shows the first results of a quantitative analysis.

Paper Nr: 40
Title:

INTELLIGENT DECISION-MAKING TIER IN SOFTWARE ARCHITECTURES

Authors:

D. Ong and S. Khaddaj

Abstract: Despite recent developments, many challenges remain in applying intelligent control systems as intelligent decision-making providers when constructing software architectures, which are required in many applications, particularly in service-oriented computing. In this work we are particularly interested in the use of intelligent decision-making mechanisms for the management of software architectures. Our research aims at designing and developing an intelligent tier which allows dynamic system architecture configuration and provisioning. The tier is based on a number of logical reasoning and learning processes which use historical data and a set of rules for analysis and diagnosis in the attempt to offer a solution to a particular problem.

Paper Nr: 55
Title:

AN ARTIFACT-BASED ARCHITECTURE FOR A BETTER FLEXIBILITY OF BUSINESS PROCESSES

Authors:

Mounira Zerari and Mahmoud Boufaida

Abstract: Workflow management technology has been applied in many enterprise information systems. Business processes provide a means of coordinating interactions between workers and the organization in a structured way. However, traditional information systems struggle with the requirement to provide flexibility, due to the dynamic nature of the modern business environment. Accordingly, adaptive Process Management Systems (PMSs) have emerged that provide some flexibility by enabling dynamic process change at run time. There are various ways in which flexibility can be achieved; one of them is flexibility by underspecification. This kind of flexibility is not supported by current products (except YAWL). In addition, none of the existing approaches considers the execution context of business process management. In this paper we propose an approach that supports flexibility by underspecification and considers the context of business process execution in the runtime environment. The main idea is to treat activities as independent parts of a process, with each activity encapsulated in an entity (artifact). The decision of which activity (module, component) will be executed depends on the environmental context and the execution conditions, and we reason about the decision taken. Our motivation is to make business processes easy to assemble from reusable components and to reason about the execution context.

Paper Nr: 93
Title:

REUSING APPROACH FOR SOFTWARE PROCESSES BASED ON SOFTWARE ARCHITECTURES

Authors:

Fadila Aoussat, Mohamed Ahmed Nacer and Mourad Oussalah

Abstract: Capitalizing on and reusing knowledge in the field of software process engineering is the objective of this work. In order to ensure high quality for software process models, with regard to the specific needs of new development techniques and methods, we propose an approach based on two essential points: capitalizing on knowledge through a domain ontology, and reusing this knowledge by handling software process models as software architectures.

Paper Nr: 172
Title:

BALANCED TESTING SCORECARD - A Model for Evaluating and Improving the Performance of Software Testing Houses

Authors:

Renata Alchorne, Rafael Nóbrega, Gibeon Aquino, Silvio Meira and Alexandre Vasconcelos

Abstract: Many companies have invested in the testing process to improve their test teams’ performance. Although Testing Maturity Models aim at tackling this issue, they are unpopular among many highly competitive and innovative companies because they encourage displacing the true goals of the mission with that of “achieving a higher maturity level”. Moreover, they generally have the effect of blinding an organization to the most effective use of its resources, as they focus only on satisfying the requirements of the model. This article defines the Balanced Testing Scorecard (BTSC) model, which aims to evaluate and improve a test team’s performance. The model, based on the Balanced Scorecard and Testing Maturity Models, is capable of aligning clients’ and financial objectives with testing maturity goals in order to improve both the test team’s and the client’s performance. The model was developed from the specialized literature and was applied in a software testing house as a case study.

Paper Nr: 211
Title:

IMPROVING DATA QUALITY IN DATA WAREHOUSING APPLICATIONS

Authors:

Lin Li, Taoxin Peng and Jessie Kennedy

Abstract: There is a growing awareness that high data quality is a key to today’s business success, and dirty data existing within data sources is one of the causes of poor data quality. To ensure high quality, enterprises need a process, methodologies and resources to monitor and analyze the quality of data, and methodologies for preventing and/or detecting and repairing dirty data. In practice, however, detecting and cleaning all the dirty data in all data sources is quite expensive and unrealistic, and the cost of cleaning dirty data needs to be considered by most enterprises. A conflict therefore arises when an organization intends to clean its data warehouse: how should it select the most important data to clean based on its business requirements? In this paper, business rules are used to classify dirty data types based on data quality dimensions. The proposed method helps to solve this problem by allowing users to select the appropriate group of dirty data types based on the priority of their business requirements. It also provides guidelines for measuring data quality with respect to different data quality dimensions and will be helpful for the development of data cleaning tools.

Paper Nr: 220
Title:

A METRIC FOR RANKING HIGH DIMENSIONAL SKYLINE QUERIES

Authors:

Marlene Goncalves and Graciela Perera

Abstract: Skyline queries have been proposed to express user preferences. Since the size of the Skyline set increases as the number of criteria grows, it is necessary to rank high dimensional Skyline queries. In this work, we propose a new metric for ranking high dimensional Skylines which makes it possible to identify the k most interesting objects in the Skyline set (Top-k Skyline). We have empirically studied the variability and performance of our metric. Our initial experimental results show that the metric is able to speed up the computation of the Top-k Skyline by up to two orders of magnitude w.r.t. the state-of-the-art metric, Skyline Frequency.
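The abstract does not define the proposed metric, but the baseline it is compared against, Skyline Frequency, is standard: a point is ranked by the number of dimension subspaces in which it belongs to the skyline. A minimal sketch (assuming smaller values are better in every dimension, and with a brute-force subspace enumeration that makes the exponential cost of this baseline visible) might look like this:

```python
from itertools import combinations

def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly
    better in at least one (smaller is better)."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def skyline(points):
    """Points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

def skyline_frequency(points, dims):
    """For each point, count the non-empty subsets of dimensions in
    which its projection lies in the subspace skyline. Exponential in
    the number of dimensions, hence the interest in cheaper metrics."""
    freq = {p: 0 for p in points}
    for k in range(1, len(dims) + 1):
        for subset in combinations(dims, k):
            proj = {p: tuple(p[d] for d in subset) for p in points}
            sky = skyline(list(proj.values()))
            for p in points:
                if proj[p] in sky:
                    freq[p] += 1
    return freq
```

Ranking the full skyline by this count and keeping the k highest yields a Top-k Skyline answer; the paper's contribution is a metric that avoids the exponential subspace enumeration.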

Paper Nr: 225
Title:

SOFTWARE QUALITY MANAGEMENT FLOSS TOOLS EVALUATION

Authors:

María Pérez, Edumilis Méndez, Kenyer Domínguez, Luis E. Mendoza and Cynthia De Oliveira

Abstract: Software Quality Management (SQM) tool selection can be quite a challenge, as can the precise definition of the functionalities that every tool of this kind should offer. On the other hand, establishing standards adequate to the FLOSS context involves analyzing both commercial and proprietary software development paradigms in contrast to the FLOSS philosophy. This article presents the evaluation of six different tools (four proprietary and two FLOSS tools) through the application of an instantiation of the Systemic Software Quality Model (MOSCA) for the SQM tool context, aiming to illustrate an accurate method for selecting tools of this nature and to identify areas of improvement for each of the evaluated tools.

Paper Nr: 227
Title:

MANAGEMENT OF RISK IN ENVIRONMENT OF DISTRIBUTED SOFTWARE DEVELOPMENT - Results of the Evaluation of a Model for Management of Risk in Distributed Software Projects

Authors:

Cirano Soares de Campos and Jorge Luis Nicolas Audy

Abstract: The objective of this article is to present the results of the evaluation of a risk management model for organizations that work with distributed software development, GeRDDoS. The model proposes risk management properly aligned between the global unit (head office) and the distributed unit (branch) executing the project, emphasizing that the success of the project depends on the success of the actions executed in both units. The model is an extension of the risk management proposal of the Software Engineering Institute (SEI), which defines a continuous and iterative process for risk management, supported by coordination and communication throughout the whole life cycle of the project. This article presents the results of applying the proposed model, through the analysis of a case study of its application in a distributed software development company located in Brazil.

Paper Nr: 240
Title:

ASPECTFX - A Framework for Supporting Collaborative Works in RIA by Aspect Oriented Approach

Authors:

Hiroaki Fukuda and Yoshikazu Yamamoto

Abstract: This paper presents AspectFX, a novel approach to enabling developers and designers to collaborate effectively in RIA development. Unlike traditional web applications, RIAs are implemented by a number of developers and designers; it is therefore reasonable to divide an application into modules and assign them to developers and designers, and collaboration among them has become important. The MVC architecture and OOP help to divide an application into functional units as modules and bring efficiency to development processes. To make these modules work together as a single application, developers have to write method invocations that use the functionality implemented in other modules; however, these additional method invocations are not among their primary tasks. They strengthen the dependencies among modules, and these dependencies make it inefficient and difficult to implement and maintain an application. This paper describes the design and implementation of AspectFX, which introduces the aspect-oriented concept and treats these additional method invocations as cross-cutting concerns. AspectFX provides methods to separate the cross-cutting concerns from the primary concerns and weaves them back together into a working application.

Paper Nr: 267
Title:

TOWARDS AUTOMATIC GENERATION OF APPLICATION ONTOLOGIES

Authors:

Eveline R. Sacramento, Vânia M. P. Vidal, José Antônio F. de Macêdo, Bernadette F. Lóscio, Fernanda Lígia R. Lopes, Fernando Lemos and Marco A. Casanova

Abstract: In the Semantic Web, domain ontologies can provide the necessary support for linking together a large number of heterogeneous data sources. In our proposal, these data sources are described as local ontologies using an ontology language. Then, each local ontology is rewritten as an application ontology, whose vocabulary is restricted to a subset of the vocabulary of the domain ontology. Application ontologies enable the identification and association of semantically corresponding concepts, so they are useful for enhancing tasks like data discovery and integration. The main contribution of this work is a strategy to automatically generate such application ontologies and mappings, given a set of local ontologies, a domain ontology and the result of the matching between each local ontology and the domain ontology.

Paper Nr: 280
Title:

A UNIFIED WEB-BASED FRAMEWORK FOR JAVA CODE ANALYSIS AND EVOLUTIONARY AUTOMATIC TEST-CASES GENERATION

Authors:

Anastasis A. Sofokleous, Panayiotis Petsas and Andreas S. Andreou

Abstract: This paper describes the implementation and integration of code analysis and testing systems in a unified web-enabled framework. The former analyzes programs written in Java and constructs the control-flow, data-flow and dependence graphs, whereas the testing system collaborates with the analysis system to automatically generate and evaluate test cases with respect to control-flow and data-flow criteria. The present work describes the design and implementation details of the framework and presents preliminary experimental results.

Paper Nr: 419
Title:

ERP INTEGRATION PROJECT - Assessing Impacts on Organisational Agents through a Change Management Approach

Authors:

Clement Perotti, Stéphanie Minel, Benoît Roussel and Jean Renaud

Abstract: This paper investigates the effects of a large ERP deployment project on the organizational agents who use the system within the framework of their activities. In this article, we first present material showing that, in our case study, this project aims to standardize the company’s Information System (IS) and represents a change on both the individual and organizational levels. Second, we go into the details of project management and, more specifically, of change management within the framework of projects. Third, we advance arguments showing that “structured” change management approaches could be an efficient way to help the project team deal with individual change in order to succeed in ERP deployment.

Paper Nr: 435
Title:

SYSTEM FOR STORAGE AND MANAGEMENT OF EEG/ERP EXPERIMENTS – Generation of Ontology

Authors:

Roman Mouček and Petr Ježek

Abstract: This paper briefly describes a system which makes it possible to store and manage data and metadata from EEG/ERP experiments. The system is planned to be registered as a source of neuroscience data and metadata, which is one of the reasons we need to provide the system ontology. Scientific papers often describe a domain using a semantic web language and consider this kind of domain modelling a crucial point of a software solution. However, real software applications rely on underlying data structures such as relational databases and object classes. That is why we summarize the fundamental differences in semantics between common data structures (relational databases, object-oriented code). The existing tools in the semantic web domain were studied and partially tested, and the first transformations from the system’s relational database and object-oriented code were performed.

Area 2 - Artificial Intelligence and Decision Support Systems

Full Papers
Paper Nr: 35
Title:

KBE TEMPLATE UPDATE PROPAGATION SUPPORT - Ontology and Algorithm for Update Sequence Computation

Authors:

Olivier Kuhn, Thomas Dusch, Parisa Ghodous and Pierre Collet

Abstract: This paper presents an approach to support Knowledge-Based Engineering template update propagation. Our aim is to provide engineers with a sequence of documents, giving the order in which they have to be processed to update them. To be able to compute a sequence, we need information about templates, Computer-Aided Design models and their relations. We designed an ontology for this purpose that, after inferring new knowledge, provides comprehensive knowledge about the templates and assemblies. This information is then used by a ranking algorithm we have developed, which provides the sequence to follow to update models efficiently without a deep analysis of the dependencies. This prevents mistakes and saves time, as the analysis and choices are computed automatically.

Paper Nr: 62
Title:

COORDINATING EVOLUTION - Designing a Self-adapting Distributed Genetic Algorithm

Authors:

Nikolaos Chatzinikolaou

Abstract: In large-scale optimisation problems, the aim is to find near-optimal solutions in very large combinatorial spaces. This learning/optimisation process can be aided by parallelisation, but it is normally difficult for engineers to decide in advance how to split the task into appropriate segments attuned to the agents working on them. This paper chooses a particular style of algorithm (a form of genetic algorithm) and describes a framework in which the parallelisation and tuning of the multi-agent system is performed automatically using a combination of self-adaptation of the agents and sharing of negotiation protocols between agents. These GA agents are themselves optimised through an evolutionary process of selection and recombination. Agents are selected according to the fitness of their respective populations, and during the recombination phase they exchange individuals from their populations as well as their optimisation parameters, which is what lends the system its self-adaptive properties. This allows the execution of optimal optimisations without the burden of tuning the evolutionary process by hand. The architecture we use has been shown to be capable of operating in peer-to-peer environments, raising confidence in its scalability through the autonomy of its components.

Paper Nr: 75
Title:

IMPROVEMENT OF DIFFERENTIAL CRISP CLUSTERING USING ANN CLASSIFIER FOR UNSUPERVISED PIXEL CLASSIFICATION OF SATELLITE IMAGE

Authors:

Indrajit Saha, Dariusz Plewczynski, Ujjwal Maulik and Sanghamitra Bandyopadhyay

Abstract: An important approach to unsupervised pixel classification in remote sensing satellite imagery is to use clustering in the spectral domain. In particular, satellite images contain landcover types, some of which cover significantly large areas, while some (e.g., bridges and roads) occupy relatively much smaller regions. Detecting regions or clusters of such widely varying sizes presents a challenging task. This fact motivated us to present a novel approach that integrates a differential evolution based crisp clustering scheme with an artificial neural network (ANN) based probabilistic classifier to yield better performance. Real-coded encoding of the cluster centres is used for the differential evolution based crisp clustering. The clustered solution is then used to find some points based on their proximity to the respective centres. The ANN classifier is thereafter trained on these points. Finally, the remaining points are classified using the trained classifier. Results demonstrating the effectiveness of the proposed technique are provided for several synthetic and real-life data sets. A statistical significance test has also been performed to establish the superiority of the proposed technique. Moreover, one remotely sensed image of Bombay city has been classified using the proposed technique to establish its utility.

Paper Nr: 140
Title:

ConTask - Using Context-sensitive Assistance to Improve Task-oriented Knowledge Work

Authors:

Jan Haas, Heiko Maus, Sven Schwarz and Andreas Dengel

Abstract: The paper presents an approach to support knowledge-intensive tasks with a context-sensitive task management system that is integrated into the user's personal knowledge space represented in the Nepomuk Semantic Desktop. The context-sensitive assistance is based on the combination of user observation, agile task modelling, automatic task prediction, as well as elicitation and proactive delivery of relevant information items from the knowledge worker's personal knowledge space.

Paper Nr: 146
Title:

IDENTIFICATION OF AREAS WITH SIMILAR WIND PATTERNS USING SOFM

Authors:

J. C Palomares Salas, A. Agüera Pérez, J. J. G. de la Rosa and J. G. Ramiro

Abstract: This paper presents a process to demarcate areas with analogous wind conditions. For this purpose, a dispersion graph of wind directions is traced for all stations placed in the studied zone. These distributions are compared among themselves using the centroids extracted with the SOFM algorithm. This information is used to build a matrix, letting us work with all relations simultaneously. By permuting the elements of this matrix it is possible to group related stations.

Paper Nr: 191
Title:

VISUAL TREND ANALYSIS - Ontology Evolution Support for the Trend Related Industry Sector

Authors:

Jessica Huster and Andreas Becks

Abstract: Ontologies are used as knowledge bases to exchange, extract and integrate information in information retrieval and search. They provide a shared and common understanding that reaches across people and application systems. In reality, domain-specific and technical knowledge evolves over time, and so must ontologies. Creative domains, such as the home textile industry, are representative of quickly evolving domains. In this domain it is also important to provide methodologies for visualising knowledge evolution. In this paper we report on our ontology-based trend analysis tool, which supports marketing experts and designers in identifying trend drifts and in comparing the analysis results against the ontology. Furthermore, means to adapt and evolve the ontology in accordance with the changing domain are provided.

Paper Nr: 204
Title:

TOWARDS THE FORMALISATION OF THE TOGAF CONTENT METAMODEL USING ONTOLOGIES

Authors:

Aurona Gerber, Paula Kotzé and Alta van der Merwe

Abstract: Metamodels are abstractions that are used to specify characteristics of models. Such metamodels are generally included in specifications or framework descriptions. A metamodel is, for instance, used to inform the generation of enterprise architecture content in the Open Group’s TOGAF 9 Content Metamodel description. However, the description of metamodels is usually done in an ad-hoc manner with customised languages, and this often results in ambiguities and inconsistencies. We are concerned with the question of how the quality of metamodel descriptions, specifically within the enterprise architecture domain, could be enhanced. Therefore we investigated whether formal ontology technologies could be used to enhance metamodel construction, specification and design. For this research, we constructed a formal ontology for the TOGAF 9 Content Metamodel and, in the process, gained valuable insight into metamodel quality. In particular, the current TOGAF 9 Content Metamodel contains ambiguities and inconsistencies, which could be eliminated using ontology technologies. In this paper we argue for the integration of formal ontologies and ontology technologies as tools in metamodel construction and specification. Ontologies allow for the construction of complex conceptual models but, more significantly, can assist an architect by depicting all the consequences of a model, allowing for more precise and complete artifacts within enterprise architectures; and because these models use standardized languages, they should promote integration and interoperability.

Paper Nr: 205
Title:

THESAURUS BASED SEMANTIC REPRESENTATION IN LANGUAGE MODELING FOR MEDICAL ARTICLE INDEXING

Authors:

Jihen Majdoubi, Mohamed Tmar and Faiez Gargouri

Abstract: The language modeling approach plays an important role in many areas of natural language processing, including speech recognition, machine translation, and information retrieval. In this paper, we propose a contribution to the conceptual indexing of medical articles using the MeSH (Medical Subject Headings) thesaurus, and we present a tool for indexing medical articles called SIMA (System of Indexing Medical Articles), which uses a language model to extract the MeSH descriptors representing a document. To assess the relevance of a document to a MeSH descriptor, we estimate the probability that the MeSH descriptor would have been generated by the language model of this document.
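The descriptor-scoring step can be illustrated with a minimal unigram language-model sketch. The function names, the additive smoothing scheme with its `mu` parameter, and the toy data below are illustrative assumptions, not details of the SIMA system:

```python
from collections import Counter

def lm_probability(descriptor_terms, document_tokens, vocab_size, mu=0.1):
    """Estimate P(descriptor | document language model) as a product of
    smoothed unigram term probabilities (names and smoothing are illustrative)."""
    counts = Counter(document_tokens)
    total = len(document_tokens)
    prob = 1.0
    for term in descriptor_terms:
        # additive smoothing avoids zero probability for unseen terms
        prob *= (counts[term] + mu) / (total + mu * vocab_size)
    return prob

doc = "asthma treatment with inhaled corticosteroids in asthma patients".split()
# a descriptor whose terms occur in the document scores higher than one that does not
p_match = lm_probability(["asthma", "treatment"], doc, vocab_size=1000)
p_other = lm_probability(["cardiac", "surgery"], doc, vocab_size=1000)
```

A real indexer would work in log space and rank all MeSH descriptors by this score; the sketch only shows the ranking principle.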

Paper Nr: 248
Title:

EXTENDING MAS-ML TO MODEL PROACTIVE AND REACTIVE SOFTWARE AGENTS

Authors:

Enyo José Tavares Gonçalves, Mariela I. Cortés, Gustavo A. L. de Campos and Viviane Torres da Silva

Abstract: The existence of Multi-Agent Systems (MAS) where agents with different internal architectures interact to achieve their goals promotes the need for a language capable of modeling these applications. In this context we highlight MAS-ML, a MAS modeling language that performs a conservative extension of UML while incorporating agent-related concepts. Nevertheless, MAS-ML was developed to support only proactive agents. This paper aims to extend MAS-ML to support the modelling of not only proactive but also reactive agents, based on the architectures described in the literature.

Paper Nr: 257
Title:

REFINING THE TRUSTWORTHINESS ASSESSMENT OF SUPPLIERS THROUGH EXTRACTION OF STEREOTYPES

Authors:

Joana Urbano, Ana Paula Rocha and Eugénio Oliveira

Abstract: Trust management is nowadays considered a promising enabler technology for extending the automation of the supply chain to the search, evaluation and selection of suppliers located worldwide. Current agent-based Computational Trust and Reputation (CTR) systems concern the representation, dissemination and aggregation of trust evidence for trustworthiness assessment, and some recent proposals are moving towards situation-aware solutions that allow the estimation of trust when the information about a given supplier is scarce or even null. However, these enhanced, situation-aware proposals rely on ontology-like techniques that are not fine-grained enough to detect light, but relevant, tendencies in a supplier’s behaviour. In this paper, we propose a technique that allows the extraction of positive and negative tendencies of suppliers in the fulfilment of established contracts. This technique can be used with any of the existing “traditional” CTR systems, improving their ability to select a partner based on the characteristics of the situation under evaluation. Finally, we test our proposal using an aggregation engine that embeds important properties of the dynamics of trust building.

Paper Nr: 260
Title:

FONTE - A Protégé Plugin for Engineering Complex Ontologies by Assembling Modular Ontologies of Space, Time and Domain Concepts

Authors:

Jorge Santos, Luís Braga and Anthony Cohn

Abstract: Humans have a natural ability to reason about scenarios including spatial and temporal information, but for several reasons the process of developing complex ontologies including time and/or space is still not well developed and remains a one-off, labor-intensive experience. In this paper we present FONTE (Factorising ONTology Engineering complexity), an ontology engineering methodology that relies on a divide-and-conquer strategy. The targeted complex ontology is built by assembling modular ontologies that capture temporal, spatial and domain (atemporal and aspatial) aspects. To support the proposed methodology, we developed a plugin for Protégé, one of the most widely used open-source ontology editors and knowledge-base frameworks.

Paper Nr: 275
Title:

AN AGENT-BASED APPROACH TO CONSUMER’S LAW DISPUTE RESOLUTION

Authors:

Nuno Costa, Davide Carneiro, Paulo Novais, Diovana Barbieri and Francisco Andrade

Abstract: Buying products online results in a new type of trade that traditional legal systems are not ready to deal with. Besides that, the increase in B2C relations has led to a growing number of consumer claims, many of which are not getting a satisfactory response. New approaches that do not involve traditional litigation are needed, considering not only the slowness of the judicial system but also the cost/benefit relation of legal procedures. This paper points to an alternative way of solving these conflicts online, using Information Technologies and Artificial Intelligence methodologies. The work presented here results in a consumer advice system that speeds up and simplifies the conflict resolution process, both for consumers and for legal experts.

Paper Nr: 296
Title:

MINING TIMED SEQUENCES WITH TOM4L FRAMEWORK

Authors:

Nabil Benayadi and Marc Le Goc

Abstract: We introduce the problem of mining sequential patterns in large databases of sequences using a stochastic approach. An example of the patterns we are interested in is: in 50% of cases, an engine stop in the car happens between 0 and 2 minutes after observing a lack of gas in the engine, which is produced between 0 and 1 minutes after the fuel tank is empty. We call these patterns “signatures”. Previous research has considered some equivalent patterns, but such work has three main problems: (1) the sensitivity of the algorithms to the values of their parameters, (2) the too-large number of discovered patterns, and (3) the discovered patterns consider only the “after” relation (succession in time) and omit temporal constraints between elements in patterns. To address these issues, we present the TOM4L process (Timed Observations Mining for Learning), which uses a stochastic representation of a given set of sequences on which inductive reasoning coupled with abductive reasoning is applied to reduce the search space. The results obtained with an application on a very complex real-world system are also presented to show the operational character of the TOM4L process.

Paper Nr: 336
Title:

STATISTICAL ASSOCIATIVE CLASSIFICATION OF MAMMOGRAMS - The SACMiner Method

Authors:

Carolina Y. V. Watanabe, Marcela X. Ribeiro, Caetano Traina Jr. and Agma J. M. Traina

Abstract: In this paper, we present a new method called SACMiner for mammogram classification using statistical association rules. The method employs two new algorithms: StARMiner* and the Voting classifier (V-classifier). StARMiner* mines association rules over continuous feature values, avoiding the bottlenecks and inconsistencies that a discretization step introduces into the learning model. The V-classifier decides which class best represents a test image, based on the statistical association rules mined. Experiments comparing SACMiner with other traditional classifiers in detecting breast cancer in mammograms show that the proposed method reaches higher values of accuracy, sensitivity and specificity. The results indicate that SACMiner is well suited to classifying mammograms. Moreover, the proposed method has a low computational cost, being linear in the number of dataset items, when compared with other classifiers. Furthermore, SACMiner is extensible to work with other types of medical images.

Paper Nr: 346
Title:

MODELING LARGE SCALE MANUFACTURING PROCESS FROM TIMED DATA - Using the TOM4L Approach and Sequence Alignment Information for Modeling STMicroelectronics’ Production Processes

Authors:

Pamela Viale, Nabil Benayadi, Marc Le Goc and Jacques Pinaton

Abstract: Modeling the manufacturing process of complex products like electronic chips is crucial to maximizing the quality of the production. The Process Mining methods developed over the last decade aim at modeling such manufacturing processes from the timed messages contained in the database of the process's supervision system. Such processes can be complex, making it difficult to apply the usual Process Mining algorithms. This paper proposes to apply the TOM4L approach (Timed Observations Mining for Learning) to model large-scale manufacturing processes. A series of timed messages is considered as a sequence of class occurrences and is represented with a Markov chain from which models are deduced with abductive reasoning. Because sequences can be very long, a notion of process phase is introduced. Sequences are cut based on the concept of class of equivalence and on information obtained from an appropriate alignment of the sequences considered. A model for each phase can then be produced locally. The model of the whole manufacturing process is obtained from the concatenation of the models of the different phases. This paper presents the application of this method to model STMicroelectronics’ manufacturing processes. STMicroelectronics’ interest in modeling its manufacturing processes stems from the necessity to detect discrepancies between the real processes and experts’ definitions of them.

Paper Nr: 365
Title:

A HIERARCHICAL HANDWRITTEN OFFLINE SIGNATURE RECOGNITION SYSTEM

Authors:

Ioana Bărbănţan, Camelia Lemnaru and Rodica Potolea

Abstract: This paper presents an original approach for solving the problem of offline handwritten signature recognition, and a new hierarchical, data-partitioning based solution for the recognition module. Our approach tackles the problem we encountered with an earlier version of our system when we attempted to increase the number of classes in the dataset: as the complexity of the dataset increased, the recognition rate dropped unacceptably for the problem considered. The new approach employs a data partitioning strategy to generate smaller sub-problems, for which the induced classification model should attain better performance. Each sub-problem is then submitted to a learning method, to induce a classification model in a similar fashion with our initial approach. We have performed several experiments and analyzed the behavior of the system by increasing the number of instances, classes and data partitions. We continued using the Naïve Bayes classifier for generating the classification models for each data partition. Overall, the classifier performs in a hierarchical way: a top level for data partitioning via clustering and a bottom level for classification sub-model induction, via the Naïve Bayes classifier. Preliminary results indicate that this is a viable strategy for dealing with signature recognition problems having a large number of persons.

Paper Nr: 379
Title:

EVALUATING PREDICTION STRATEGIES IN AN ENHANCED META-LEARNING FRAMEWORK

Authors:

Silviu Cacoveanu, Camelia Lemnaru and Rodica Potolea

Abstract: Finding the best learning strategy for a new domain/problem can prove to be an expensive and time-consuming process even for the experienced analysts. This paper presents several enhancements to a meta-learning framework we have previously designed and implemented. Its main goal is to automatically identify the most reliable learning schemes for a particular problem, based on the knowledge acquired about existing data sets, while minimizing the work done by the user but still offering flexibility. The main enhancements proposed here refer to the addition of several classifier performance metrics, including two original metrics, for widening the evaluation criteria, the addition of several new benchmark data sets for improving the outcome of the neighbor estimation step, and the integration of complex prediction strategies. Systematic evaluations have been performed to validate the new context of the framework. The analysis of the results revealed new research perspectives in the meta-learning area.

Paper Nr: 439
Title:

EFFECTIVE ANALYSIS OF FLEXIBLE COLLABORATION PROCESSES BY WAY OF ABSTRACTION AND MINING TECHNIQUES

Authors:

Alfredo Cuzzocrea, Francesco Folino and Luigi Pontieri

Abstract: A knowledge-based framework for supporting and analyzing loosely-structured collaborative processes (LSCPs) is presented in this paper. The framework takes advantage of a number of knowledge representation, management and processing capabilities, including recent process mining techniques. In order to support the enactment, analysis and optimization of LSCPs in an internetworked virtual scenario, we illustrate a flexible integration architecture, coupled with a knowledge representation and discovery environment, and enhanced by ontology-based knowledge processing capabilities. In particular, an approach for restructuring logs of LSCPs is proposed, which allows LSCPs to be effectively analyzed at varying abstraction levels with process mining techniques (originally devised to analyze well-specified and well-structured workflow processes). The capabilities of the proposed framework were experimentally tested in several application contexts. Interesting results concerning the experimental analysis of collaborative manufacturing processes across a distributed CAD platform are shown.

Short Papers
Paper Nr: 14
Title:

DIAGNOSIS OF ACTIVE SYSTEMS BY LAZY TECHNIQUES

Authors:

Gianfranco Lamperti and Marina Zanella

Abstract: In society, laziness is generally considered a negative trait, if not a capital fault. Not so in computer science, where lazy techniques are widespread, used either to improve efficiency or to allow for computation of unbounded objects, such as infinite lists in modern functional languages. We bring the idea of lazy computation to the context of model-based diagnosis of active systems. Up to a decade ago, all approaches to diagnosis of discrete-event systems required the generation of the global system model, a technique that is impractical when the system is large and distributed. To overcome this limitation, a lazy approach was devised in the context of diagnosis of active systems, which works with no need for the global system model. However, a similar drawback arose a few years later, when uncertain temporal observations were proposed. In order to reconstruct the system behavior based on an uncertain observation, an index space is generated as the determinization of a nondeterministic automaton derived from the graph of the uncertain observation, the prefix space. The point is that the prefix space and the index space suffer from the same computational difficulties as the system model. To confine the explosion of memory space when dealing with diagnosis of active systems with uncertain observations, a laziness-based, circular-pruning technique is presented. Experimental results offer evidence for the considerable effectiveness of the approach, both in space and in time reduction.

Paper Nr: 15
Title:

ALGORITHM APPLIED IN THE PRIORITISATION OF THE STAKEHOLDERS

Authors:

Anna María Gil Lafuente and Luciano Barcellos de Paula

Abstract: According to scientific studies, maintaining relationships with all stakeholders and addressing all issues related to sustainability in business is neither possible nor desirable. The company should seek to establish an order of priority among stakeholders and issues to ensure good management of time, resources and expectations. Based on Stakeholder Theory, we discuss the importance of stakeholder management in the pursuit of sustainability in enterprises. In this paper we focus our research on the prioritisation of stakeholders through the analysis of an empirical study by a consulting firm in Brazil. In this case, the company needs to establish priorities for its stakeholders. To achieve this objective, the consultant used a fuzzy logic algorithm, applying the P-Latin Composition. To complete the study, we present the contributions, empirical results and conclusions of our investigation.

Paper Nr: 37
Title:

CLASSIFICATION OF MARKET NEWS AND PREDICTION OF MARKET TRENDS

Authors:

P. Kroha and R. Nienhold

Abstract: In this contribution we present our results concerning the influence of news on market trends. We processed stock news with a grammar-driven classification. We found some potential for market trend prediction and present promising experimental results. As a main result, we show that during the last 10 years there are only two points at which the ratio of good news to bad news reaches a minimum, corresponding to the only two long-term trend breaking points in that period, and that in both cases these points appear about six months before the long-term market trend changed.

Paper Nr: 100
Title:

USING KEY PERFORMANCE INDICATORS TO FACILITATE STRATEGY IMPLEMENTATION AND BUSINESS PROCESS IMPROVEMENT IN SMEs

Authors:

Miguel Merino Gil and David Neila Sousa

Abstract: The use of metrics for a wide range of business management issues is common in big organizations but infrequent in small enterprises. Business Intelligence (BI) is seen largely as the domain of large organizations, yet a business of any size can benefit greatly from BI tools. To date, the cost of BI technology has been too high for small businesses, and its scope limited to market operations and financial reports. We develop a method for successful BI implementation in SMEs on the basis of performance indicators derived from business process monitoring, and on constructing a plan of action (business initiatives) to carry out a defined strategy through targeted objectives.

Paper Nr: 105
Title:

CONTEXT-POLICY-CONFIGURATION - Paradigm of Intelligent Autonomous System Creation

Authors:

Oleksiy Khriyenko, Sergiy Nikitin and Vagan Terziyan

Abstract: The next generation of integration systems will utilize different methods and techniques to achieve the vision of ubiquitous knowledge: the Semantic Web and Web Services, Agent Technologies and Mobility. Nowadays, unlimited interoperability and collaboration are important for industry, business, education and research, health and wellness, and other areas of life. All the parties in a collaboration process have to share data as well as information about the actions they are performing. During the last couple of years, policies have gained attention both in research and in industry. Policies are considered an appropriate means for controlling the behaviour of complex systems and are used in different domains for automating system administration tasks (configuration, security, or Quality of Service (QoS) monitoring and assurance). The paper presents a Semantic Web driven approach to context-aware, policy-based system configuration. The proposed Context-Policy-Configuration approach for the creation of intelligent autonomous systems allows system behaviour to be modified without source code changes and without requiring information about the dependencies of the components being governed. The system can be continuously adjusted to externally imposed constraints through policy determination and change.

Paper Nr: 144
Title:

NORMALIZATION PROCEDURES ON MULTICRITERIA DECISION MAKING - An Example on Environmental Problems

Authors:

Ethel Mokotoff, Estefanía García-Vázquez and Joaquín Pérez Navarro

Abstract: In Multicriteria Decision Making, a normalization procedure is required before conducting the aggregation process. Even though this methodology is widely applied in strategic decision support systems, few published papers detail this specific question. In this paper, we analyze the influence of normalization procedures on the weighted sum aggregation in Multicriteria Decision problems devoted to sustainable development.
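The sensitivity studied here can be seen in a small sketch: the same decision matrix, aggregated with the same weights, yields a different best alternative under min-max normalization than under division by the column maximum. The matrix values and both procedures are illustrative choices, not taken from the paper:

```python
def minmax(col):
    # rescale a criterion column to [0, 1] using its observed range
    lo, hi = min(col), max(col)
    return [(v - lo) / (hi - lo) for v in col]

def by_max(col):
    # rescale a criterion column by dividing by its maximum
    hi = max(col)
    return [v / hi for v in col]

def weighted_sum(matrix, weights, norm):
    # matrix rows are alternatives, columns are benefit criteria
    cols = [norm(c) for c in zip(*matrix)]
    rows = list(zip(*cols))
    return [sum(w * v for w, v in zip(weights, row)) for row in rows]

matrix = [[100, 50],   # alternative A
          [95, 100],   # alternative B
          [94, 10]]    # alternative C
weights = [0.5, 0.5]

scores_minmax = weighted_sum(matrix, weights, minmax)
scores_bymax = weighted_sum(matrix, weights, by_max)
best_minmax = scores_minmax.index(max(scores_minmax))  # alternative A wins
best_bymax = scores_bymax.index(max(scores_bymax))     # alternative B wins
```

Min-max stretches the small differences on the first criterion (whose minimum is far from zero), so the ranking flips; this is exactly the kind of effect the normalization choice introduces.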

Paper Nr: 153
Title:

A REVIEW OF LEARNING METHODS ENHANCED IN STRATEGIES OF NEGOTIATING AGENTS

Authors:

Marisa Masvoula, Panagiotis Kanellis and Drakoulis Martakos

Abstract: Advances in Artificial Intelligence have contributed to the enhancement of agent strategies with learning techniques. We provide an overview of the learning methods that form the core of state-of-the-art negotiators. The main objective is to facilitate comprehension of the domain by framing current systems with respect to learning objectives and phases of application. We also aim to reveal current trends, as well as the virtues and weaknesses of applied methods.

Paper Nr: 156
Title:

MALE AND FEMALE CHROMOSOMES IN GENETIC ALGORITHMS

Authors:

Ghodrat Moghadampour

Abstract: Evolutionary algorithms work on randomly generated populations, which converge over runs toward the desired optima. Randomly generated populations are of different quality, as measured by their average fitness values. In many cases, switching all bits of a randomly generated binary individual to their opposite values can quickly produce a better individual. This technique increases diversity among individuals in the population and allows the search space to be explored in a more rigorous way. In this research, the effect of such an operation during the initialization of the population and the crossover operator has been investigated. Experimentation with 44 test problems over 2200 runs showed that this technique can help produce better individuals on average in around 32% of cases.

Paper Nr: 157
Title:

EFFICIENT LEARNING OF DYNAMIC BAYESIAN NETWORKS FROM TIMED DATA

Authors:

Ahmad Ahdab and Marc Le Goc

Abstract: This paper addresses the problem of learning a Dynamic Bayesian network from timed data without prior knowledge of the system. One of the main problems in learning a Dynamic Bayesian network is building and orienting the edges of the network while avoiding loops. The problem is more difficult when the data are timed. This paper proposes an algorithm based on an adequate representation of a set of sequences of timed data that uses an information-based measure of the relations between two edges. This algorithm is part of the Timed Observation Mining for Learning (TOM4L) process, which is based on the Theory of the Timed Observations. The paper illustrates the algorithm with an application to the Apache system of the Arcelor-Mittal Steel Group, a real-world knowledge-based system that diagnoses a galvanization bath.

Paper Nr: 170
Title:

DECISIONWAVE - Embedding Collaborative Decision Making into Business Processes

Authors:

Christoph Krammer and Alexander Mädche

Abstract: DecisionWave is a platform to allow distributed teams to jointly take decisions within business processes. The focus of this work is to provide the conceptual architecture and a prototype for a seamless integration of communication support for group decisions into existing business processes within operational information systems. The DecisionWave system can then be used to document the decision process and generate templates for future decisions of the same type. Together with the architecture, we built a prototype using the salesforce.com CRM system and the Google Wave communication infrastructure.

Paper Nr: 171
Title:

TRAINING A FUZZY SYSTEM IN WIND CLIMATOLOGIES DOWNSCALING

Authors:

A. Agüera, J. J. G. de la Rosa, J. G. Ramiro and J. C. Palomares

Abstract: The wind climate measured at a point is usually described as the result of a regional wind climate forced by local effects derived from topography, roughness and obstacles in the surrounding area. This paper presents a method that uses fuzzy logic to generate the local wind conditions caused by these geographic elements. The fuzzy systems proposed in this work are specifically designed to modify a regional wind frequency rose according to the terrain slopes in each direction. In order to optimize these fuzzy systems, Genetic Algorithms improve an initial population and, eventually, select the individual which produces the best approximation to the real measurements.

Paper Nr: 177
Title:

AN INTELLIGENT FRAMEWORK FOR AUTOMATIC EVENT DETECTION IN ROBOTIC SOCCER GAMES - An Auxiliary Tool to Help Coaches Improve their Teams’ Performance

Authors:

João Portela, Pedro Abreu, Luís Paulo Reis, Eugénio Oliveira and Julio Garganta

Abstract: In soccer, the level of performance is determined by a number of complex, interrelated variables: technique, tactics, psychological factors and, finally, fitness. Because of this, analyzing this information in real time has become an impossible task, even for soccer experts like professional coaches. Automatic event detection tools occupy an important role in this reality, although nowadays there is no tool capable of producing information that helps a professional coach choose his team's strategy for a specific game. In this research project an automatic event detection tool is proposed, together with a set of game statistics defined by a group of sports researchers. All the teams present in the 2009 RoboCup tournament have a pass success rate above 65%. These statistics provide an interesting viewpoint on how to evaluate a team's performance, such as the importance of dominating the opposing team's field without losing control of one's own (this can be seen in the top 3 zone dominance statistics). In the future this project will serve as a base for building a framework capable of simulating a match between two heterogeneous soccer teams and producing reliable information for optimizing team performance.

Paper Nr: 196
Title:

AUTOMATIC SEARCH-BASED TESTING WITH THE REQUIRED K-TUPLES CRITERION

Authors:

Anastasis A. Sofokleous, Andria Krokou and Andreas S. Andreou

Abstract: This paper examines the use of data flow criteria in software testing and uses evolutionary algorithms to automate the generation of test data with respect to the required k-tuples criterion. The proposed approach is incorporated into an existing test data generation framework consisting of a program analyzer and a test data generator. The former analyses JAVA programs, creates control and data flow graphs, generates paths in relation to data flow dependencies, simulates test case execution and determines code coverage on the control flow graphs. The test data generator takes advantage of the program analyzer's capabilities and generates test cases by utilizing a series of genetic algorithms. The performance of the framework is compared to similar methods and evaluated using both standard and randomly generated JAVA programs. The preliminary results demonstrate the efficacy and efficiency of this approach.

Paper Nr: 198
Title:

SENTENCE SIMILARITY MEASURES TO SUPPORT WORKFLOW EXCEPTION HANDLING

Authors:

A. Aldeeb, D. M. Pearce, K. Crockett and M. J. Stanton

Abstract: The occurrence of exceptions in workflow systems is common. Searching past exception handlers' records for similar exceptions is a good source for designing a solution to the exception at hand. In the literature, there are three approaches to retrieving similar workflow exception records from the knowledge base: the keyword-based approach, concept hierarchies, and pattern matching retrieval systems. However, in a workflow domain, exceptions are often described by workflow participants as a short text in natural language rather than as a set of user-defined keywords. Therefore, the above-mentioned approaches are not effective in retrieving relevant information. The proposed approach considers the semantic similarity between workflow exceptions rather than term-matching schemes, taking into account the semantic information and word order information implied in the sentence. Our findings show that sentence similarity measures are capable of supporting the retrieval of relevant information from workflow exception handling knowledge. This paper presents a novel approach to applying sentence similarity measures within the case-based reasoning methodology in workflow exception handling. A data set comprising 76 sentence pairs representing instance-level workflow exceptions is tested, and the results show significant correlation between the automated similarity measures and the human domain expert's intuition.
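To illustrate how a measure can weight both a semantic component and a word-order component, here is a minimal sketch. It is not the authors' measure: `semantic_sim` is a toy identity check where a real system would consult a lexical database, and the weight `delta = 0.85` is illustrative.

```python
def semantic_sim(w1, w2):
    """Toy word similarity: 1 if identical, else 0 (a real system
    would use a lexical resource such as WordNet)."""
    return 1.0 if w1 == w2 else 0.0

def sentence_similarity(s1, s2, delta=0.85):
    """Weighted combination of semantic overlap and word-order similarity
    over the joint word set of the two sentences."""
    t1, t2 = s1.lower().split(), s2.lower().split()
    joint = sorted(set(t1) | set(t2))
    # semantic component: cosine of semantic vectors over the joint word set
    v1 = [max(semantic_sim(w, u) for u in t1) for w in joint]
    v2 = [max(semantic_sim(w, u) for u in t2) for w in joint]
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = (sum(a * a for a in v1) ** 0.5) * (sum(b * b for b in v2) ** 0.5)
    sem = dot / norm if norm else 0.0
    # word-order component: compare positions of shared words
    r1 = [t1.index(w) + 1 if w in t1 else 0 for w in joint]
    r2 = [t2.index(w) + 1 if w in t2 else 0 for w in joint]
    diff = sum((a - b) ** 2 for a, b in zip(r1, r2)) ** 0.5
    plus = sum((a + b) ** 2 for a, b in zip(r1, r2)) ** 0.5
    order = 1 - diff / plus if plus else 0.0
    return delta * sem + (1 - delta) * order
```

Identical exception descriptions score 1.0, while descriptions sharing no words score near 0, with word order breaking ties between lexically similar sentences.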

Paper Nr: 207
Title:

PERFORMANCE GAIN FOR CLUSTERING WITH GROWING NEURAL GAS USING PARALLELIZATION METHODS

Authors:

Alexander Adam, Sebastian Leuoth, Sascha Dienelt and Wolfgang Benn

Abstract: The amount of data in databases is increasing steadily. Clustering this data is one of the common tasks in Knowledge Discovery in Databases (KDD). For KDD purposes, this means that many algorithms need so much time that they become practically unusable. To counteract this development, we apply parallelization techniques to clustering. Recently, new parallel architectures have become affordable to the common user. We investigated, in particular, the GPU (Graphics Processing Unit) and multi-core CPU architectures. These incorporate a large number of computing units paired with low latencies and high bandwidths between them. In this paper we present the results of different parallelization approaches to the GNG (Growing Neural Gas) clustering algorithm. This algorithm is beneficial as it is an unsupervised learning method and chooses the number of neurons needed to represent the clusters on its own.

Paper Nr: 210
Title:

HOURLY PREDICTION OF ORGAN FAILURE AND OUTCOME IN INTENSIVE CARE BASED ON DATA MINING TECHNIQUES

Authors:

Marta Vilas-Boas, Manuel Filipe Santos, Filipe Portela, Álvaro Silva and Fernando Rua

Abstract: The use of Data Mining techniques makes it possible to extract knowledge from high volumes of data. Currently, there is a trend to use Data Mining models in intensive care to support physicians' decision processes. Previous results used offline data for predicting organ failure and outcome for the next day. This paper presents the INTCare system and the recently generated Data Mining models. Advances in INTCare led to a new goal: prediction of organ failure and outcome for the next hour with data collected in real time in the Intensive Care Unit of Hospital Geral de Santo António, Porto, Portugal. This experiment used Artificial Neural Networks, Decision Trees, Logistic Regression and Ensemble Methods, and we have achieved very interesting results, having proven that it is possible to use real-time data from the Intensive Care Unit to make highly accurate predictions for the next hour. This is a great advance in terms of intensive care, since predicting organ failure and outcome on an hourly basis will allow intensivists to adopt a faster and more pro-active attitude in order to avoid or reverse organ failure.

Paper Nr: 216
Title:

USING NATURAL LANGUAGE PROCESSING FOR AUTOMATIC EXTRACTION OF ONTOLOGY INSTANCES

Authors:

Carla Faria, Rosario Girardi, Ivo Serra, Maria Macedo and Djefferson Maranhão

Abstract: Ontologies are used by modern knowledge-based systems to represent and share knowledge about an application domain. Ontology population aims to identify instances of concepts and relationships of an ontology. Manual population by domain experts and knowledge engineers is an expensive and time-consuming task, so automatic and semi-automatic approaches are needed. This article proposes an initial approach for automatic ontology population from textual sources that uses natural language processing and machine learning techniques. Some experiments using a family law corpus were conducted in order to evaluate it. Initial results are promising and indicate that our approach can extract instances with good effectiveness.

Paper Nr: 234
Title:

A SOFTWARE SYSTEM FOR DATA INTEGRATION AND DECISION SUPPORT FOR EVALUATION OF AIR POLLUTION HEALTH IMPACT

Authors:

Daniele Toscani, Federica Bargna, Luigi Quarenghi, Francesco Archetti and Ilaria Giordani

Abstract: In this paper we present a software system for decision support (DSS – Decision Support System) aimed at forecasting high admission demand on health care structures due to environmental pollution. The algorithmic kernel of the system is based on machine learning; the software architecture is such that both persistent and sensor data are integrated through a data integration infrastructure. Given the actual concentrations of different pollutants, measured by a network of sensors, the DSS allows forecasting the demand for hospital admissions for acute diseases in the next 1 to 6 days. We tested our system on cardio-vascular and respiratory diseases in the area of Milan.

Paper Nr: 238
Title:

EXPERT KNOWLEDGE MANAGEMENT BASED ON ONTOLOGY IN A DIGITAL LIBRARY

Authors:

Antonio Martín and Carlos León

Abstract: The architecture of future Digital Libraries should allow any user to access available knowledge resources from anywhere, at any time, and in an efficient manner. Moreover, the individual user faces a great deal of useless information in addition to the substantial amount of useful information. The goal is to investigate how to best combine Artificial Intelligence and Semantic Web technologies for semantic searching across largely distributed and heterogeneous digital libraries. Artificial Intelligence and the Semantic Web have provided both new possibilities and challenges to automatic information processing in the search engine process. The major research tasks involved are to apply an appropriate infrastructure for specific digital library system construction, to enrich metadata records with ontologies, and to enable semantic searching upon such an intelligent system infrastructure. We study how to improve the efficiency of search methods in a distributed data space like a Digital Library. This paper outlines the development of a Case-Based Reasoning prototype system based on an ontology for information retrieval in the Digital Library of the University of Seville. The results demonstrate that by incorporating ontologies and expert systems into the search process, the effectiveness of information retrieval is enhanced.

Paper Nr: 242
Title:

A THREE LEVEL ABSTRACTION HIERARCHY TO REPRESENT PRODUCT STRUCTURAL INFORMATION

Authors:

Marcela Vegetti, Horacio Leone and Gabriela P. Henning

Abstract: Product models should integrate and efficiently manage all the information associated with products in the context of industrial enterprises or supply chains (SCs). Nowadays, it is quite common for an organization, and even each area within a company, to have its own product model. This situation leads to information duplication and its associated problems. In addition, traditional product models do not properly handle the high number of variants managed in today's competitive markets. Therefore, there is a need for an integrated product model to be shared by the organizations participating in global SCs or by all areas within a company. One way to reach an intelligent integration among product models is by means of an ontology. PRONTO (PRoduct ONTOlogy) is an ontology for the Product Modelling domain, able to efficiently handle product variants. This contribution presents a ConceptBase formalization of PRONTO, as well as an extension of it that allows the inference of product structural knowledge and the specification of valid products.

Paper Nr: 272
Title:

ASSESSMENT OF THE CHANGE IN THE NUMBER OF NEURONS IN HIDDEN LAYERS OF NEURAL NETWORKS FOR FAULT IDENTIFICATION IN ELECTRICAL SYSTEMS

Authors:

David Targueta da Silva, Pedro Henrique Gouvêa Coelho, Joaquim Augusto Pinto Rodrigues and Luiz Biondi Neto

Abstract: This work describes a performance evaluation of ANNs (Artificial Neural Networks) used to identify faults in electrical systems for several numbers of neurons in the hidden layers. The number of neurons in the hidden layers depends on the complexity of the problem to be represented. Currently, there are no reliable rules for determining, a priori, the number of hidden neurons, so that number depends largely on the experience of the practitioners who design the ANNs. This paper reports experimental results using neural networks with varying numbers of hidden neurons, to help the neural network user find an adequate configuration in terms of the number of neurons in the hidden layers, so that ANNs are more efficient, particularly for fault identification applications.

Paper Nr: 307
Title:

SWARM INTELLIGENCE FOR RULE DISCOVERY IN DATA MINING

Authors:

Andre B. de Carvalho, Taylor Savegnago and Aurora Pozo

Abstract: This paper aims to discuss Swarm Intelligence approaches for Rule Discovery in Data Mining. The first approach is a new rule learning algorithm based on Particle Swarm Optimization (PSO) that uses a multiobjective technique to conceive a complete novel approach to induce classifiers, called MOPSO-N. In this approach the properties of the rules can be expressed as different objectives, and the algorithm then finds these rules in a single run by exploring Pareto dominance concepts. The second approach, called the PSO/ACO2 algorithm, uses a hybrid technique combining Particle Swarm Optimization and Ant Colony Optimization. Both approaches directly deal with continuous and nominal attribute values, a feature that current bio-inspired rule induction algorithms lack. In this work, an experiment is performed to evaluate both approaches by comparing the performance of the induced classifiers.
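The Pareto dominance concept mentioned above can be stated compactly. This sketch keeps the rules whose objective vectors are non-dominated; the two objectives and the sample values are hypothetical, not taken from the paper.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    """Non-dominated candidates among rule-quality vectors,
    e.g. hypothetical (sensitivity, specificity) pairs."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]
```

A multiobjective swarm would maintain such a front across iterations instead of collapsing the objectives into a single score.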

Paper Nr: 322
Title:

NARFO* ALGORITHM - Optimizing the Process of Obtaining Non-redundant and Generalized Semantic Association Rules

Authors:

Rafael Garcia Miani, Cristiane Akemi Yaguinuma, Marilde Terezinha Prado Santos and Vinícius Ramos Toledo Ferraz

Abstract: This paper proposes the NARFO* algorithm, an algorithm for mining non-redundant and generalized association rules based on fuzzy ontologies. The main contribution of this work is to optimize the process of obtaining non-redundant and generalized semantic association rules by introducing the minGen (Minimal Generalization) parameter into the latest version of the NARFO algorithm. This parameter acts on rule generalization, especially for rules with low minimum support, preserving their semantics and eliminating redundancy, thus considerably reducing the number of generated rules. Experiments showed that NARFO* produces semantic rules without redundancy, obtaining 68.75% and 55.54% reductions in comparison with the XSSDM algorithm and the NARFO algorithm, respectively.

Paper Nr: 363
Title:

MODEL OF KNOWLEDGE SPREADING FOR MULTI-AGENT SYSTEMS

Authors:

D. Oviedo, M. C. Romero-Ternero, M. D. Hernández, A. Carrasco, F. Sivianes and J. I. Escudero

Abstract: This paper presents a model to spread knowledge in multiagent-based control systems, where simplicity, scalability, flexibility and optimization of the communication system are the main goals. This model not only implies guidelines on how communication among the different agents in the system is carried out, but also defines the organization of the elements of the system. The proposed model is applied to a control system of a solar power plant, obtaining an architecture whose agents are optimized for the problem. The agents in this system can cooperate and coordinate to achieve a global goal, encapsulate the hardware interfaces, and make the control system easily adaptable to different requirements through configuration. The model also includes an algorithm that adds new variables to the communication among agents and enables the control of knowledge flow in the system.

Paper Nr: 375
Title:

TREEAD - A Tool that Enables the Re-use of Experience in Enterprise Architecture Description

Authors:

Paulo Tomé, Luís Amaral and Ernesto Costa

Abstract: Enterprise Architecture (EA) is an important organizational issue. The EA resulting from a development process is an important tool in different situations. It is used as a communication tool between the system's stakeholders. It is an enabler of changes. More importantly, the EA definition allows the building of unifying or coherent forms or structures. There has been growing interest in this topic area in recent years. Several authors have proposed methods, frameworks and languages with the aim of helping organizations in the process of EA definition. This paper proposes a software tool that can be used to re-use experience in EA definition processes. The proposed tool is independent of the framework, method, language or software tool used in the EA description process.

Paper Nr: 402
Title:

SUPPORTING COMPLEXITY IN MODELING BAYESIAN TROUBLESHOOTING

Authors:

Luigi Troiano and Davide De Pasquale

Abstract: Troubleshooting complex systems, such as industrial plants and machinery, is a task entailing an articulated decision-making process that is hard to structure and generally relies on human experience. Recently, probabilistic reasoning, and Bayesian networks in particular, proved to be an effective means to support and drive decisions in troubleshooting. However, troubleshooting a real system requires facing scalability and feasibility issues, so that the direct employment of Bayesian networks is not practical. In this paper we report our experience in applying the Bayesian approach to an industrial case, and we propose a methodology to decompose a complex problem into more tractable parts.

Paper Nr: 404
Title:

NEW APPROACHES TO ENTERPRISE COOPERATION GENERATION AND MANAGEMENT

Authors:

Jörg Lässig and Ullrich Trommler

Abstract: The paper considers the problem of task-specific cooperation generation and management in a network of small or medium-sized enterprises. Such short-term cooperations are called Virtual Enterprises. So far this problem has been discussed by several authors applying different methods from artificial intelligence, such as multi-agent systems, ant colony optimization, or genetic algorithms, and combinations of them. In this paper we discuss this problem from a target-oriented point of view and focus on the question of how it can be modeled to keep its complexity controllable, by considering sequential, parallel, and non-combinatorial approaches. After describing the implementation of a cooperation generation solution as a rich Internet application, we also describe solutions for the management of such cooperations, considering aspects such as replanning.

Paper Nr: 411
Title:

RELATIONSHIP BETWEEN LEVY DISTRIBUTION AND TSALLIS DISTRIBUTION

Authors:

Jyhjeng Deng

Abstract: This paper describes the relationship between a stable process, the Levy distribution, and the Tsallis distribution. These two distributions are often confused as different versions of each other, and are commonly used as mutators in evolutionary algorithms. This study shows that they are usually different, but are identical in special cases for both normal and Cauchy distributions. These two distributions can also be related to each other. With proper equations for two different settings (with Levy’s kurtosis parameter α < 0.3490 and otherwise), the two distributions match well, particularly for 1≤α≤2.
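For reference, the two families can be written side by side in their standard forms; this is a hedged sketch of textbook notation, not the paper's specific parameterization. A symmetric Lévy α-stable density is defined via its characteristic function, while the Tsallis distribution is the q-Gaussian; both reduce to the Gaussian (α = 2, q → 1) and to the Cauchy (α = 1, q = 2), which are the special cases of coincidence the abstract mentions.

```latex
% Symmetric Levy alpha-stable density (via its characteristic function):
L_\alpha(x) = \frac{1}{\pi} \int_0^\infty e^{-\gamma k^\alpha} \cos(kx)\, dk

% Tsallis q-Gaussian:
p_q(x) = \frac{1}{Z_q}\left[1 - (1-q)\,\beta x^2\right]^{\frac{1}{1-q}},
\qquad \text{Gaussian: } \alpha = 2 \leftrightarrow q \to 1,
\quad \text{Cauchy: } \alpha = 1 \leftrightarrow q = 2.
```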

Paper Nr: 423
Title:

SUPERVISED LEARNING FOR AGENT POSITIONING BY USING SELF-ORGANIZING MAP

Authors:

Kazuma Moriyasu, Takeshi Yoshikawa and Hidetoshi Nonaka

Abstract: We propose a multi-agent cooperation method that helps each agent cope with partial observation and reduces the amount of teaching data. It learns cooperative actions between agents by using the Self-Organizing Map for supervised learning. The input vectors of the Self-Organizing Map are data that reflect the operator's intention. We show that our proposed method can acquire cooperative actions between agents and reduce the amount of teaching data through two evaluation experiments using the pursuit problem, a classic multi-agent system problem.
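The Self-Organizing Map update underlying such a method can be sketched minimally: pull the best-matching unit and its neighbours toward each (operator-labelled) input vector. The 1-D topology, learning rate and neighbourhood radius here are illustrative assumptions, not the paper's configuration.

```python
import math
import random

def bmu(weights, x):
    """Index of the best-matching unit: the neuron closest to input x."""
    return min(range(len(weights)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))

def train_som(weights, samples, epochs=20, lr=0.3, radius=1.0):
    """One-dimensional SOM: move the BMU and its neighbours toward each sample,
    with a Gaussian neighbourhood function over the neuron indices."""
    for _ in range(epochs):
        for x in samples:
            b = bmu(weights, x)
            for i, w in enumerate(weights):
                h = math.exp(-((i - b) ** 2) / (2 * radius ** 2))  # neighbourhood
                weights[i] = [wi + lr * h * (xi - wi) for wi, xi in zip(w, x)]
    return weights
```

After training, `bmu` can map a new observation to the prototype action associated with the winning neuron.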

Paper Nr: 433
Title:

REPLANTING THE ANSWER GARDEN - Cultivating Expertise through Decision Support Technology

Authors:

Stephen R. Diasio and Núria Agell

Abstract: A growing body of literature within the information technology field has focused on augmenting organizational knowledge and expertise. Due to increasing environmental complexity and changing technology, the exogenous assumptions found within this literature must be readdressed. Expert systems, group decision support systems, and collective intelligence tools are presented to illustrate how the expertise needed in organizational decision-making is changing and may not reside within traditional organizational boundaries. This paper suggests future research streams on how expertise can be cultivated through decision support technologies and how organizational expertise and problem-solving can be augmented, reflecting the changing roles of experts and non-experts.

Posters
Paper Nr: 49
Title:

A PERCEPTION MECHANISM FOR TWO-DIMENSIONAL SHAPES IN THE VIRTUAL WORLD

Authors:

Jae-Woo Park and Jong-Hee Park

Abstract: A lifelike agent in the virtual world is an agent designed to simulate realistic human behavior. Agents continuously repeat a process of perception, recognition, decision and behavior in the virtual world. Through these processes, the agents store new information in their memory or modify their knowledge if needed. This study mainly deals with perception, the intermediate step between image processing and recognition. We show how the agents perceive shapes, and how it is possible to infer the part of a shape that is partially hidden from the agent's vision.

Paper Nr: 108
Title:

TOWARD A MODEL OF CUSTOMER EXPERIENCE - An Action Research Study within a Mobile Telecommunications Company

Authors:

Michael Anaman and Mark Lycett

Abstract: Retaining profitable and high-value customers is a major strategic objective for many companies. In mature mobile markets where growth has slowed, the defection of customers from one network to another has intensified and is strongly fuelled by poor customer experience. In this light, this research-in-progress paper describes a strategic approach to the use of Information Technology as a means of improving customer experience. Using action research in a mobile telecommunications operator, a model is developed that evaluates disparate customer data, residing across many systems, and suggests appropriate contextual actions where experience is poor. The model provides value in identifying issues, understanding them in the context of the overall customer experience (over time) and dealing with them appropriately. The novelty of the approach is the synthesis of data analysis with an enhanced understanding of customer experience which is developed implicitly and in real-time.

Paper Nr: 137
Title:

EVOLVING STRUCTURES FOR PREDICTIVE DECISION MAKING IN NEGOTIATIONS

Authors:

Marisa Masvoula, Panagiotis Kanellis and Drakoulis Martakos

Abstract: Predictive decision making increases the individual or joint gain of negotiators, and has been extensively studied. One particular skill of predicting agents is forecasting their opponents' future offers. Current systems focus on enhancing learning techniques in the decision-making module of negotiating agents, with the purpose of developing more robust systems. Empirical studies are conducted in bounded problem spaces, where the data distribution is known or assumed. Our proposal concentrates on the incorporation of learning structures into agents' decision making, capable of forecasting opponents' future offers even in open problem spaces, which is the case in most negotiation situations.

Paper Nr: 139
Title:

AUTOMATIC BEHAVIOUR-BASED ANALYSIS AND CLASSIFICATION SYSTEM FOR MALWARE DETECTION

Authors:

Jaime Devesa, Igor Santos, Xabier Cantero, Yoseba K. Penya and Pablo G. Bringas

Abstract: Malware is any kind of program explicitly designed to harm, such as viruses, trojan horses or worms. Since the amount of malware is growing exponentially, it already poses a serious security threat. Therefore, every incoming code must be analysed in order to classify it as malware or benign software. These tests commonly combine static and dynamic analysis techniques in order to extract the greatest amount of information from distrustful files. Moreover, the increase in the number of attacks makes it infeasible to manually test the thousands of suspicious archives that reach antivirus laboratories every day. Against this background, we address here an automated system for malware behaviour analysis based on emulation and simulation techniques. Creating a secure and reliable sandbox environment allows us to test the suspicious code retrieved without risk. In this way, we can also generate evidence and classify the samples with several machine-learning algorithms. We have developed the proposed solution and tested it with real malware. Finally, we have evaluated it in terms of reliability and time performance, two of the main aspects for such a system to work.

Paper Nr: 187
Title:

CREATING AND DECOMPOSING FUNCTIONS USING FUZZY FUNCTIONS

Authors:

József Dániel Dombi and József Dombi

Abstract: In this paper we present a new approach for composing and decomposing functions. This technology is based on the Pliant concept, which is a subclass of the fuzzy concept. We create an effect by using the conjunction of sigmoid functions. Afterwards, we make a proper transformation of the effect in order to define the neutral value. Then, by repeatedly applying this method, we can create effects. By aggregating the effects, we can compose the desired function. This tool is also capable of function decomposition, and can be used to solve a variety of real-time problems. Two advantages of our solution are that the parameters of our algorithm have a semantic meaning, and that it is also possible to use the parameter values in any kind of learning procedure.

Paper Nr: 219
Title:

AN EXTENSIBLE ENSEMBLE ENVIRONMENT FOR TIME SERIES FORECASTING

Authors:

Claudio Ribeiro, Ronaldo Goldschmidt and Ricardo Choren

Abstract: Diverse works have demonstrated that ensembles can improve the performance over any individual solution for time series forecasting. This work presents an extensible environment that can be used to create, experiment with and analyse ensembles for time series forecasting. Usually, the analyst develops the individual solutions and the ensemble algorithms for each experiment. The proposed environment intends to provide a flexible tool for the analyst to include, configure and experiment with individual solutions and to build and execute ensembles. In this paper, we describe the environment and its features, and we present a simple experiment on its usage.
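The ensemble idea can be illustrated with a toy sketch: combine the next-step predictions of individual forecasters by a (weighted) average. The callable interface and the two toy models below are assumptions for illustration, not the environment's actual API.

```python
def ensemble_forecast(models, history, weights=None):
    """Combine individual forecasters' next-step predictions.
    Each model is a callable history -> prediction (hypothetical interface)."""
    preds = [m(history) for m in models]
    if weights is None:
        weights = [1.0 / len(models)] * len(models)  # plain average by default
    return sum(w * p for w, p in zip(weights, preds))

# Two toy individual solutions: naive last-value and a 3-step moving average.
naive = lambda h: h[-1]
moving_avg = lambda h: sum(h[-3:]) / len(h[-3:])
```

An extensible environment would let the analyst register such callables and ensemble-combination rules without rewriting the experiment harness.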

Paper Nr: 310
Title:

HYBRID APPROACH FOR INCOHERENCE DETECTION BASED ON NEURO-FUZZY SYSTEMS AND EXPERT KNOWLEDGE

Authors:

Susana Martin-Toral, Gregorio I. Sainz-Palmero and Yannis Dimitriadis

Abstract: The way in which document collections are generated, modified or updated creates problems and mistakes in information coherence, leading to legal, economic and social problems. To tackle this situation, this paper proposes the development of an intelligent virtual domain expert, based on summarization, matching and neuro-fuzzy systems, able to detect incoherences concerning concepts, values, or references in technical documentation. In this scope, an incoherence is seen as the lack of consistency between related documents. Each document is summarized in the form of 4-tuples of terms describing relevant ideas or concepts that must be free of incoherences. These representations are then matched using several well-known algorithms. The final decision about the real existence of an incoherence, and its relevance, is obtained by training a neuro-fuzzy system with expert knowledge, based on the previous knowledge of the activity area and domain experts. The final system offers a semi-automatic solution for incoherence detection and decision support.

Paper Nr: 311
Title:

MINING FARMERS PROBLEMS IN WEB-BASED TEXTUAL DATABASE APPLICATION

Authors:

Said Mabrouk, Mahmoud Rafea, Ahmed Rafea and Samhaa El-Beltagy

Abstract: VERCON (Virtual Extension and Research Communication Network) is an agricultural web-based application developed to improve communication between agricultural research institutions and extension persons for the benefit of farmers and agrarian businesses. The farmers' problems component is one of VERCON's main components. It is used to receive farmers' problems and provide them with solutions. Over the last five years, problems and their solutions have accumulated in a textual database. This paper presents an integrated approach for mining these problems and their solutions. The opportunity and potential of mining and extracting information from this resource were identified with several objectives in mind: a) discovering patterns and relations that can be used to enhance the utilization of this valuable resource, b) analyzing solutions given for similar problems, by different experts or by the same expert at different times, in terms of their similarities and differences, and c) creating patterns of problems and their solutions that can be used to classify new problems and provide solutions without the need for a domain expert.

Paper Nr: 323
Title:

KONSULTANT - A Knowledge Base for Automated Interpretation of Profit Values

Authors:

Bojan Tomić and Tanja Milić

Abstract: Modern reporting systems and business intelligence tools provide various reports for everyday (business) use. Unfortunately, it seems that these reports contain mostly data and little or no information. The consequence is that users need to manually analyze and interpret large quantities of data in order to get information on how the business is doing. A potential solution for this problem is presented in this paper. It is a knowledge base for automated interpretation of annual profit values for enterprises.

Paper Nr: 353
Title:

MODEL- AND SIMULATION-DRIVEN SYSTEMS ENGINEERING FOR CRITICAL INFRASTRUCTURE SECURITY ARCHITECTURES

Authors:

Sascha Goldner and Philip Rech

Abstract: The design of systems that support the protection of critical infrastructures or state borders against various threats is a complex challenge. It requires the use of both modelling and simulation in order to reach the operational goal. Threat scenarios like terrorist attacks, (organized) crime or natural disasters involve many different organisations, authorities, and technologies. These systems are often operated only through implicitly known processes and diverse operational guidelines based on dissimilar legislations and overlapping responsibilities. In order to cope with these complex infrastructure systems and their interconnected processes, a scenario- and architecture-based systems engineering approach must be implemented to design a solution architecture compliant with the requirements and with internal and external demands. This paper gives an overview of the developed approach towards the system architecture and explains the different engineering steps needed to implement this architecture for real use-cases.

Paper Nr: 368
Title:

A HYBRIDIZED GENETIC ALGORITHM FOR COST ESTIMATION IN BRIDGE MAINTENANCE SYSTEMS

Authors:

Khaled Shaban, Abdunnaser Younes, Nathan Good, Mohammed Iqbal and Richard Lourenco

Abstract: A hybridized genetic algorithm is proposed to determine a repair schedule for a network of bridges. The schedule aims for the lowest overall cost while maintaining each bridge at satisfactory quality conditions. Appreciation, deterioration, and cost models are employed to model real-life behaviour. To reduce the computational time, pre-processing algorithms are used to determine an initial genome that is closer to the optimal solution rather than a randomly generated genome. A post-processing algorithm that locates a local optimal solution from the output of the genetic algorithm is employed for further reduction of computational costs. Experimental work was carried out to demonstrate the effectiveness of the proposed approach in determining the bridge repair schedule. The addition of a pre-processing algorithm improves the results if the simulation period is constrained. If the simulation is run sufficiently long all pre-processing algorithms converge to the same optimal solution. If a pre-processing algorithm is not implemented, however, the simulation period increases significantly. The cost and deterioration tests also indicate that certain pre-processing algorithms are better suited for larger bridge networks. The local search performed on the genetic algorithm output is always seen as a positive add-on to further improve results.

Paper Nr: 386
Title:

AUTOMATIC SUMMARIZATION OF ARABIC TEXTS BASED ON RST TECHNIQUE

Authors:

Mohamed Hédi Mâaloul, Iskandar Keskes, Lamia Hadrich Belguith and Philippe Blache

Abstract: We present in this paper an automatic summarization technique for Arabic texts, based on RST. We first present a corpus study which enabled us to specify, following empirical observations, a set of relations and rhetorical frames. Then, we present our method to automatically summarize Arabic texts. Finally, we present the architecture of the ARSTResume system. Our method is based on the Rhetorical Structure Theory (Mann, 1988) and uses linguistic knowledge. It relies on three pillars. The first consists in locating the rhetorical relations between the minimal units of the text by applying rhetorical rules. One of these units is the nucleus (the segment necessary to maintain coherence) and the other can be either a nucleus or a satellite (an optional segment). The second pillar is the representation and the simplification of the RST-tree that represents the source text in hierarchical form. The third pillar is the selection of sentences for the final summary, which takes into account the type of the rhetorical relations chosen for the extract.

Paper Nr: 387
Title:

A MODEL FOR REPRESENTING VAGUE LINGUISTIC TERMS AND FUZZY RULES FOR CLASSIFICATION IN ONTOLOGIES

Authors:

Cristiane A. Yaguinuma, Vinícius R. T. Ferraz, Marilde T. P. Santos, Heloisa A. Camargo and Tatiane M. Nogueira

Abstract: Ontologies have been successfully employed in applications that require semantic information processing. However, traditional ontologies are not able to express fuzzy or vague information, which often occurs in human vocabulary as well as in several application domains. In order to deal with such a restriction, concepts of fuzzy set theory should be incorporated into ontologies so that it is possible to represent and reason over fuzzy or vague knowledge. In this context, this paper proposes a model for representing fuzzy ontologies covering fuzzy properties and fuzzy rules, and implements fuzzy reasoning methods such as classical and general fuzzy reasoning, aiming to support the classification of new instances based on fuzzy rules.
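As a generic illustration of the kind of fuzzy-rule classification the paper targets (this is not the proposed ontology model; the linguistic terms, cut-off points and class labels below are invented):

```python
def tri(a, b, c):
    """Triangular membership function peaking at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# vague linguistic terms for a "height" property (hypothetical cut-offs, metres)
short = tri(1.40, 1.55, 1.70)
tall = tri(1.60, 1.80, 2.00)

def classify(height, rules):
    """Fire each rule (term -> class label); per label keep the best
    firing degree, and return the label with the highest degree."""
    scores = {}
    for term, label in rules:
        degree = term(height)
        scores[label] = max(scores.get(label, 0.0), degree)
    return max(scores, key=scores.get), scores

rules = [(short, "ShortPerson"), (tall, "TallPerson")]
label, scores = classify(1.75, rules)
print(label, scores)  # → TallPerson {'ShortPerson': 0.0, 'TallPerson': 0.75}
```

An ontology-backed version would attach the membership functions to datatype properties and the rules to class definitions, which is where the paper's representation model comes in.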

Paper Nr: 432
Title:

DEALING WITH IMBALANCED PROBLEMS - Issues and Best Practices

Authors:

Rodica Potolea and Camelia Lemnaru

Abstract: An imbalanced problem is one in which, in the available data, one class is represented by a smaller number of instances than the other classes. The drawbacks induced by the imbalance are analyzed and possible solutions for overcoming these issues are presented. In dealing with imbalanced problems, one should consider a wider context, taking into account the imbalance rate together with other data-related particularities and the classification algorithms with their associated parameters.

Paper Nr: 448
Title:

USING CREDIT AND DEBIT CARD PURCHASE TRANSACTION DATA FOR RETAIL SALES STATISTICS - Using Point of Sale Data to Measure Consumer Spending: The Case of Moneris Solutions

Authors:

Lorant Szabo

Abstract: This paper presents how Moneris Solutions stores the credit and debit card purchase transactions that it processes for its merchants, and the methodology developed to process this data to produce consumer spending statistics. The transactions are extracted from the production systems at the end of each day and loaded into a warehouse. Aggregations are executed with pre-established frequency, or on an ad-hoc basis, for various merchant samples to calculate sales growth rates in multiple segments. The results are then matched against known events (e.g. the Olympic Games) that may have impacted consumer spending during the analysed timeframe. Alternatively, the results may be presented to measure spending growth between any two given time periods in a specific geographical location or industry.

Area 3 - Information Systems Analysis and Specification

Full Papers
Paper Nr: 11
Title:

PROCESS MINING FOR JOB NETS IN INTEGRATED COMPLEX COMPUTER SYSTEMS

Authors:

Shinji Kikuchi, Yasuhide Matsumoto, Motomitsu Adachi and Shingo Moritomo

Abstract: Batch jobs, such as shell scripts, programs and command lines, are used to process large amounts of data in large-scale enterprise systems, such as supply chain management (SCM) systems. These batch jobs are connected and cascaded via certain signals or files so as to process various kinds of data in the proper order. Such connected batch jobs are called “job nets”. In many cases, it is difficult to understand the execution order of batch jobs in a job net because of the complexity of their relationships or because of a lack of information. However, without understanding the behavior of batch jobs, we cannot achieve reliable system management. In this paper, we propose a method to derive a job net model representing the execution order of the job net from its logs (execution results) by using a process mining technique. Improving on the Heuristic Miner algorithm, we developed an analysis method which takes into account the concurrency of batch job executions in large-scale systems. We evaluated our analysis method by a conformance check method using actual job net logs obtained from a large-scale SCM system. The results show that our approach can accurately and appropriately estimate the execution order of jobs in a job net.
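The Heuristic Miner algorithm that the authors build on derives ordering relations from a dependency measure over direct-succession counts in the log. A minimal sketch of that measure (the example log is invented, and the paper's concurrency-aware refinements are not shown):

```python
from collections import Counter

def dependency_measures(traces):
    """Heuristic-Miner dependency measure between activities a and b:

        dep(a, b) = (|a>b| - |b>a|) / (|a>b| + |b>a| + 1)

    where |a>b| counts how often a is directly followed by b in the log.
    Values near 1 suggest a genuine ordering; values near 0 suggest
    noise or concurrency.
    """
    direct = Counter()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            direct[(a, b)] += 1
    deps = {}
    for (a, b), n_ab in direct.items():
        n_ba = direct[(b, a)]                 # Counter returns 0 if absent
        deps[(a, b)] = (n_ab - n_ab * 0 - n_ba) / (n_ab + n_ba + 1)
    return deps

# hypothetical job-net log: three executions of jobs A..D
log = [list("ABCD"), list("ACBD"), list("ABCD")]
deps = dependency_measures(log)
print(round(deps[("A", "B")], 2))  # → 0.67  (A reliably precedes B)
print(round(deps[("B", "C")], 2))  # → 0.25  (B and C look concurrent)
```

The low B/C score is exactly the kind of signal the paper's extension interprets as concurrency rather than a strict ordering.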

Paper Nr: 69
Title:

GOAL, SOFT-GOAL AND QUALITY REQUIREMENT

Authors:

Thi-Thuy-Hang Hoang and Manuel Kolp

Abstract: Requirements are input for the process of building software. Depending on the development methodology, they are usually classified into several subclasses of requirements. Traditional approaches distinguish between functional and non-functional requirements, while modern goal-based approaches use hard-goals and soft-goals to describe requirements. While non-functional requirements are also known as quality requirements, neither hard-goals nor soft-goals are equivalent to quality requirements. Due to their abstractness, quality requirements are usually described as soft-goals, but soft-goals are not necessarily quality requirements. In this paper, we propose a way to resolve the problematic ambiguity between soft-goals and quality requirements in a goal-based context. We reposition the notion of quality requirement in relation to hard-goals and soft-goals. This allows us to decompose a soft-goal into a set of hard-goals (required functions) and quality requirements (required qualities of those functions). The immediate applications of this analysis are quality-aware development methodologies for multi-agent systems, of which QTropos is an example.

Paper Nr: 79
Title:

A METHOD FOR PORTFOLIO MANAGEMENT AND PRIORITIZATION - An Incremental Funding Method Approach

Authors:

Gustavo Taveira, Antonio Juarez Alencar and Eber Assis Schmitz

Abstract: In today’s very competitive business environment, making the best possible use of limited resources is crucial to achieve success and gain competitive advantage. To accomplish such a goal, organizations have to maximize the return provided by their portfolio of future investments, choosing very carefully the IT projects they undertake and the risks they are willing to accept; otherwise they are bound to waste time and money, and still be likely to fail. This article introduces a method that enables managers to better evaluate the investment to be made in a portfolio of IT projects. The method favors the identification of common parts, avoiding the duplication of work efforts, and the selection of the implementation order that yields the highest payoff considering a given risk exposure policy. Moreover, it extends Denne and Cleland-Huang’s ideas on minimum marketable feature modules and uses both Decision Theory and the Principles of Choice to guide the decisions made under uncertainty.

Paper Nr: 84
Title:

A DECISION FRAMEWORK FOR SELECTING A SUITABLE SOFTWARE DEVELOPMENT PROCESS

Authors:

Itamar Sharon, Michel dos Santos Soares, Joseph Barjis, Jan van den Berg and Jos Vrancken

Abstract: For streamlining the activities of software development, a number of software development processes have been proposed in the past few decades. Despite the relative maturity in the field, large companies involved in developing software are still struggling with selecting suitable software processes. This article takes up the challenge of developing a framework that supports decision makers in choosing an appropriate software development process for each individual project. After introducing the problem, the software development processes included in this research are identified. To align software development processes and software projects, a number of project characteristics are next determined. Based on these two analyses, a decision framework is proposed that, given the project characteristics, determines the most appropriate software development process. In a first attempt to validate the framework, it has been applied to two case studies where the outcomes of the decision framework are compared to those found by means of a collection of experts’ opinions. It was found that the framework and the experts yield similar outcomes.

Paper Nr: 92
Title:

IDENTIFYING RUPTURES IN BUSINESS-IT COMMUNICATION THROUGH BUSINESS MODELS

Authors:

Juliana Jansen Ferreira, Renata Mendes de Araujo and Fernanda Araujo Baião

Abstract: In scenarios where Information Technology (IT) becomes a critical factor for business success, Business-IT communication problems raise difficulties for reaching strategic business goals. Business models are considered an instrument through which this communication may be held. This work argues that business model communicability (i.e., the capability of a business model to facilitate Business-IT communication) influences how Business and IT areas understand each other and how IT teams identify and negotiate appropriate solutions for business demands. Based on semiotic theory, this article proposes business model communicability as an important aspect to be evaluated for making the Business-IT communication cycle possible, and describes an exploratory study to identify communication ruptures in the evaluation of business model communicability.

Paper Nr: 159
Title:

MANAGING DATA DEPENDENCY CONSTRAINTS THROUGH BUSINESS PROCESSES

Authors:

Joe Y.-C Lin and Shazia Sadiq

Abstract: Business Process Management (BPM) and related tools and systems have generated tremendous advantages for enterprise systems as they provide a clear separation between process, application and data logic. In spite of the abstraction value that BPM provides through explicit articulation of process models, a seamless flow between the data, application and process layers has not been fully realized in mainstream enterprise software, thus often leaving process models disconnected from underlying business semantics captured through data and application logic. The result of this disconnect is disparity (and even conflict) in enforcing various rules and constraints in the different layers. In this paper, we propose to synergise the process and data layers through the introduction of data dependency constraints that can be modelled at the process level and enforced at the data level through a (semi-)automated translation into DBMS native procedures. The simultaneous and consistent specification ensures that disparity between the process and data logic can be minimized.

Paper Nr: 162
Title:

A CONCEPTUAL FRAMEWORK FOR THE DEVELOPMENT OF APPLICATIONS CENTRED ON CONTEXT AND EVIDENCE-BASED PRACTICE

Authors:

Expedito Carlos Lopes, Ulrich Schiel, Vaninha Vieira and Ana Carolina Salgado

Abstract: Conceptual frameworks are used to present a preferred approach to an idea or thought. Their use considerably facilitates the productivity of the data modelling phase and hence the development of applications, since it preserves portability and usability across domains. Evidence-Based Practice (EBP), usually employed in Medicine, represents a decision-making process centered on justifications of relevant information. EBP is used in several areas; however, we did not find conceptual models involving EBP that preserve portability and usability across domains. Moreover, the decision-making context can have an impact on evidence-based decision-making, but the integration of evidence and context is still an open issue. This work presents a conceptual framework that integrates evidence with context, applying it to the conceptual modelling phase for EBP domains. The use of context allows the filtering of more useful information. The main contributions of this paper are the incorporation of contextual information into EBP procedures and the presentation of the proposed conceptual framework. An implementation that uses the filtering of contextual information to support evidence-based decision making in the area of crime prevention is also presented to validate the framework.

Paper Nr: 203
Title:

A NEW TECHNIQUE FOR IDENTIFICATION OF RELEVANT WEB PAGES IN INFORMATIONAL QUERIES RESULTS

Authors:

Fabio Clarizia, Luca Greco and Paolo Napoletano

Abstract: In this paper we present a new technique for retrieving relevant web pages in informational query results. The proposed technique, based on a probabilistic model of language, is embedded in a traditional web search engine. The relevance of a Web page has been obtained through the judgments of human assessors who, using a continuous scale, assigned a degree of importance to each of the analyzed websites. In order to validate the proposed method, a comparison with a classic engine is presented, based on Precision and Recall measures and on a measure of distance with respect to the significance scores obtained from humans.

Paper Nr: 222
Title:

TOWARDS EXTENDING IMS LD WITH SERVICES AND CONTEXT AWARENESS - Application to a Navigation and Fishing Simulator

Authors:

Valérie Monfort, Slimane Hammoudi and Maha Khemaja

Abstract: Few e-Learning platforms propose a solution for ubiquity and context-aware adaptability. Current standards, such as Learning Design (LD), require an extension to support context awareness. Based on previous related works, we define a fully interoperable and learner (ambient) context-adaptable platform, using a metamodeling-based approach that mixes MDD, parameterized transformations, and model composition. The scope of this paper is to extend the LD metamodel as a first step. We use a concrete software engineering industrial product that was promoted by the French Government.

Paper Nr: 273
Title:

A MODEL-DRIVEN APPROACH TO MANAGING AND CUSTOMIZING SOFTWARE PROCESS VARIABILITIES

Authors:

Fellipe Araújo Aleixo, Marília Aranha Freire, Wanderson Câmara dos Santos and Uirá Kulesza

Abstract: This paper presents a model-driven approach to managing and customizing software process variabilities. It increases productivity through: (i) process reuse; and (ii) the integration and automation of the definition, customization, deployment and execution activities of software processes. Our approach is founded on the principles and techniques of software product lines and model-driven engineering. In order to evaluate the feasibility of our approach, we have designed and implemented it using existing and available technologies.

Paper Nr: 332
Title:

ENTERPRISE ARCHITECTURE - State of the Art and Challenges

Authors:

Jorge Cordeiro Duarte and Mamede Lima-Marques

Abstract: This paper analyzes current approaches for Enterprise Architecture (EA). Current EA objectives, concepts, frameworks, models, languages, and tools are discussed. The main initiatives and existing works are presented. Strengths and weaknesses of current approaches and tools are discussed, particularly their complexity, cost and low utilization. An EA theoretical framework is provided using a knowledge management approach. Future trends and some research issues are discussed. A research agenda is proposed in order to reduce EA complexity and make it accessible to organizations of any size.

Paper Nr: 366
Title:

COMPOSITIONAL VERIFICATION OF BUSINESS PROCESSES MODELLED WITH BPMN

Authors:

Luis E. Mendoza Morales, Manuel I. Capel Tuñón and María A. Pérez

Abstract: A specific check that is required to be performed as part of Business Process Modelling (BPM) is whether the activities and tasks described by Business Processes (BPs) are sound and well–coordinated. In this work we present how the Model–Checking verification technique for software can be integrated within a Formal Compositional Verification Approach (FVCA) to allow the automatic verification of BPs modelled with Business Process Modelling Notation (BPMN). The FVCA is based on a formal specification language with composition constructs. A timed semantics of BPMN defined in terms of Communicating Sequential Processes + Time (CSP+T) extends untimed BPMN modelling entities with timing constraints in order to detail the behavior of BPs during the execution of the real scenarios that they represent. With our proposal we are able to specify and to develop the Business Process Task Model (BPTM) of a target business system. In order to show a practical use of our proposal, a BPTM of an instance of a BPM enterprise–project related to the Customer Relationship Management (CRM) business is presented.

Paper Nr: 376
Title:

STOOG - Style-Sheets-based Toolkit for Graph Visualization

Authors:

Guillaume Artignan and Mountaz Hascoët

Abstract: The information visualization process can be described as a set of transformations applied to raw data to produce interactive graphical representations of information for end-users. A challenge in information visualization is to provide flexible and powerful ways of describing and controlling this process. Most work in this domain addresses the problem at either the application level or the toolkit level. Approaches at the toolkit level are devoted to developers, while approaches at the application level are devoted to end-users. Our approach builds on previous work but goes one step beyond by proposing a unifying view of this process that can be used by both developers and end-users. Our contribution is a system named STOOG that makes a three-fold contribution: (1) a style-sheets-based language for the explicit description of the transformation process at stake in information visualization, (2) an application that automatically builds interactive and graphical visualizations of data based on style-sheet descriptions, (3) a user interface devoted to the conception of style sheets. In this paper, we present STOOG's basic concepts and mechanisms and provide a case study to illustrate the benefits of using STOOG for the interactive and visual exploration of information.

Short Papers
Paper Nr: 43
Title:

AN ACCESS CONTROL MODEL FOR MASSIVE COLLABORATIVE EDITION

Authors:

Juliana de Melo Bezerra, Celso Massaki Hirata and Edna Maria dos Santos

Abstract: The Web has enabled the collaborative elaboration of documents, through editing systems, by a large number of users. An access control model is essential to govern user access in these systems. We propose an access control model for massive collaborative edition based on RBAC, which takes into account workflow, document structure and organizational structure. Workflow is used to coordinate the collaboration of participants. The document structure allows the parallel edition of parts of the document, and the organizational structure allows easier management of users. We designed and implemented an editing system based on the model. We show that the model is useful to specify collaborative editing tools and can be used to categorize and identify collaborative editing systems.
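The role-based core of such a model can be sketched in a few lines. This is a generic RBAC skeleton, not the paper's model, which additionally layers workflow, document structure and organizational structure on top; the user, role, action and section names are invented:

```python
class RBAC:
    """Minimal role-based access control: users are assigned roles, and
    roles are granted (action, section) permissions on document parts."""

    def __init__(self):
        self.user_roles = {}   # user -> set of roles
        self.role_perms = {}   # role -> set of (action, section)

    def assign(self, user, role):
        self.user_roles.setdefault(user, set()).add(role)

    def grant(self, role, action, section):
        self.role_perms.setdefault(role, set()).add((action, section))

    def allowed(self, user, action, section):
        # access is permitted if any of the user's roles holds the permission
        return any((action, section) in self.role_perms.get(r, set())
                   for r in self.user_roles.get(user, set()))

ac = RBAC()
ac.grant("editor", "edit", "chapter-1")
ac.grant("reviewer", "comment", "chapter-1")
ac.assign("alice", "editor")
print(ac.allowed("alice", "edit", "chapter-1"))     # → True
print(ac.allowed("alice", "comment", "chapter-1"))  # → False
```

Granting permissions on sections rather than whole documents is what makes the parallel edition of document parts mentioned in the abstract enforceable.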

Paper Nr: 56
Title:

INVESTIGATING THE ROLE OF UML IN THE SOFTWARE MODELING AND MAINTENANCE - A Preliminary Industrial Survey

Authors:

Giuseppe Scanniello, Carmine Gravino and Genny Tortora

Abstract: In the paper we present the results of an industrial survey conducted with Italian software companies that employ a relevant part of the graduate students of the University of Basilicata and of the University of Salerno. The survey mainly investigates the state of the practice regarding the use of UML (Unified Modeling Language) in software development and maintenance. The results reveal that the majority of the companies use UML for modeling software systems (in the analysis and design phases) and for performing maintenance operations. Moreover, maintenance operations are mainly performed by less experienced practitioners.

Paper Nr: 57
Title:

A GENERIC METHOD FOR BEST PRACTICE REFERENCE MODEL APPLICATION

Authors:

Stefanie Looso

Abstract: The perceived importance of IT governance has increased in the last decade. Best practice reference models (such as ITIL, COBIT, or CMMI) promise support for the diverse challenges IT departments are confronted with. Therefore, the interest in best practice reference models grows, and more and more companies apply BPRM to support their IT governance. However, there is limited knowledge about how BPRM are applied, and there is no structured method to support their application and realize the full potential of BPRM. Therefore, this paper presents the construction and evaluation of a generic method for the application of BPRM. Following the language-based approach of method engineering, method elements are derived and formally described. The design science research criteria presented by Hevner et al. (2004) are applied to the evaluation of the constructed method. The intention of this research is to reduce the inefficiencies caused by the inconsistent use of best practice reference models.

Paper Nr: 67
Title:

A CONSOLIDATED ENTERPRISE REFERENCE MODEL - Integrating McCarthy’s and Hruby’s Resource-Event-Agent Reference Models

Authors:

Wim Laurier, Maxime Bernaert and Geert Poels

Abstract: This paper introduces a new Resource-Event-Agent (REA) reference model that integrates the transaction and conversion reference models provided by McCarthy, aimed at designing databases for accounting information systems, and by Hruby, aimed at software development for enterprise information systems, into a single conceptual model that accounts for both inter-enterprise and intra-enterprise processes. This consolidated reference model was developed to support data integration between multiple enterprises and different kinds of enterprise information systems (e.g. ERP, accounting and management information systems). First, the state of the art in REA reference models is addressed, presenting McCarthy’s and Hruby’s reference models and assessing their ability to represent exchanges (e.g. product for money), transfers (e.g. shipment) and transformations (e.g. production process). Second, the new, consolidated REA enterprise reference model is introduced. Third, object model templates are presented, demonstrating that the consolidated REA reference model is able to represent exchanges, transfers and transformations, where McCarthy’s and Hruby’s reference models can each only represent two of these features.

Paper Nr: 68
Title:

SOCIALIZATION OF WORK PRACTICE THROUGH BUSINESS PROCESS ANALYSIS

Authors:

Mukhammad Andri Setiawan and Shazia Sadiq

Abstract: In today’s competitive business era, having the best practice business process is fundamental to the success of an organisation. Best practice reference models are generally created by experts in the domain, but often the best practice can be implicitly derived from the work practices of actual workers within the organization. In this paper, we propose to utilize the experiences and knowledge of previous business process users to inform and improve the current practices, thereby bringing about a socialization of work practice. We have developed a recommendation system to assist users to select the best practices of previous users through an analysis of business process execution logs. Recommendations are generated based on multi criteria analysis applied to the accumulated process data and the proposed approach is capable of extracting meaningful recommendations from large data sets in an efficient way.

Paper Nr: 73
Title:

INTEGRATING AND OPTIMIZING BUSINESS PROCESS EXECUTION IN P2P ENVIRONMENTS

Authors:

Marco Fernandes, Marco Pereira, Joaquim Arnaldo Martins and Joaquim Sousa Pinto

Abstract: Service-oriented applications and environments, as well as peer-to-peer networks, have recently become widely researched topics. This paper addresses the benefits and issues of integrating both technologies in the scope of business process execution and presents proposals to reduce network traffic and improve its efficiency.

Paper Nr: 97
Title:

AN ESTIMATION PROCEDURE TO DETERMINE THE EFFORT REQUIRED TO MODEL BUSINESS PROCESSES

Authors:

Claudia Cappelli, Flavia Maria Santoro, Vanessa Nunes, Marcio de O. Barros and José Roberto Dutra

Abstract: Business process modeling projects are increasingly widespread in organizations. Companies have several processes to be identified and modeled, and they usually invest heavily in hiring expert consultants to do this job. However, they still find no guidelines to help them estimate how much a process modeling project will cost or how long it will take. We propose an approach to estimate the effort required to conduct a BPM project and discuss results obtained from over 50 projects in a large Brazilian company.

Paper Nr: 116
Title:

APPLYING AND EVALUATING AN MDA PROCESS MODELING APPROACH

Authors:

Rita Suzana P. Maciel, Bruno C. da Silva and Ana Patrícia F. Magalhães

Abstract: In order to use the MDA approach, several software processes have been defined over recent years. However, there is a need to specify and maintain MDA software process definitions systematically, in a way that also supports process enactment, reuse, evolution, management and standardization. Some empirical investigations have been performed concerning the usage of several MDA-related approaches. In this paper we describe our technique for MDA software process specification and enactment, including tool support. We also present case studies and the concluding results on the application of our approach to process modeling.

Paper Nr: 136
Title:

ADAPTIVE EXECUTION OF SOFTWARE SYSTEMS ON PARALLEL MULTICORE ARCHITECTURES

Authors:

Thomas Rauber and Gudula Rünger

Abstract: Software systems are often implemented based on a sequential flow of control. However, new developments in hardware towards explicit parallelism within a single processor chip require a change at the software level to benefit from the tremendous performance improvements provided by hardware. Parallel programming techniques and efficient parallel execution schemes that assign parallel program parts to cores of the target machine for execution are required. In this article, we propose a design approach for generating parallel software for existing business software systems. The parallel software is structured such that it enables a parallel execution of tasks of the software system based on different execution scenarios. The internal logical structure of the software system is used to create software incarnations in a flexible way. The transformation process is supported by a transformation toolset which preserves correctness and functionality.

Paper Nr: 141
Title:

ONTOLOGY-BASED AUTONOMIC COMPUTING FOR RESOURCE SHARING BETWEEN DATA WAREHOUSES IN DECISION SUPPORT SYSTEMS

Authors:

Vlad Nicolicin-Georgescu, Vincent Benatier, Remi Lehn and Henri Briand

Abstract: Complexity is the biggest challenge in managing information systems today, because of the continuous growth in data and information. As decision experts, we are faced with the problems generated by managing Decision Support Systems, one of which is the efficient allocation of shared resources. In this paper, we propose a solution for improving the allocation of shared resources between groups of data warehouses within a decision support system, with Service Level Agreements and Quality of Service as performance objectives. We base our proposal on the notions of autonomic computing, challenging the traditional design of autonomic systems and taking into consideration decision support systems’ special characteristics, such as usage discontinuity and service level specifications. To this end, we propose specific heuristics for autonomic self-improvement and integrate aspects of semantic web and ontology engineering as an information source for knowledge base representation, while providing a critical view of the advantages and disadvantages of such a solution.

Paper Nr: 186
Title:

A FEDERATED TRIPLE STORE ARCHITECTURE FOR HEALTHCARE APPLICATIONS

Authors:

Bruno Alves, Michael Schumacher and Fabian Cretton

Abstract: Interoperability has the potential to improve care processes and decrease costs of the healthcare system. The advent of enterprise ICT solutions to replace costly and error-prone paper-based records did not fully convince practitioners, and many still prefer traditional methods for their simplicity and relative security. The Medicoordination project, which integrates several partners in healthcare on a regional scale in French-speaking Switzerland, aims at designing eHealth solutions to the problems highlighted by reluctant practitioners. In a derivative project, and through a complementary approach to the IHE XDS IT Profile, we designed, implemented and deployed a prototype of a semantic registry/repository for storing medical electronic records. We present herein the design and specification of a generic, interoperable registry/repository based on the technical requirements of the Swiss eHealth strategy. Although this paper presents an overview of the whole architecture, the focus is on the registry, a federated semantic RDF store managing metadata about medical documents. Our goals are the urbanization of information systems through SOA and ensuring a level of interoperability between different actors.

Paper Nr: 188
Title:

STRATEGIC REASONING IN SOFTWARE DEVELOPMENT

Authors:

Yves Wautelet, Sodany Kiv, Vi Tran and Manuel Kolp

Abstract: Information systems tend to be ever larger and of strategic importance in today's businesses. That is why software engineering is no longer only a domain for middle managers and software engineering professionals: top managers require rich models leading to visions on which they can perform strategic analysis to determine their adequacy with long-term objectives. The i* approach, with its social modeling capabilities, as well as service-oriented modeling, are part of this effort. The strategic services model combines these two approaches and defines a couple of environmental factors (namely threats and opportunities) enabling strategic reasoning on the basis of the enterprise's ”high-level” added values. Such a framework offers the adequate aggregation level for enabling top managers to take the adequate long-term decisions for information systems development. The aim of this paper is to illustrate the application of the strategic services model, as well as strategic reasoning, in the context of the development of a collaborative software application for supply chain management. This case study is of particular interest since the application must be adopted by several actors played by cooperating or competing companies that have to figure out the consequences of adopting such software.

Paper Nr: 195
Title:

MODELING TIME CONSTRAINTS IN INTER-ORGANIZATIONAL WORKFLOWS

Authors:

Mouna Makni, Nejib Ben Hadj-Alouane, Moez Yeddes and Samir Tata

Abstract: This paper deals with the integration of temporal constraints within the context of Inter-Organizational Workflows (IOWs). Obviously, expressing and satisfying time deadlines is important for modern business processes and needs to be optimized for efficiency and extreme competitiveness. In this paper, we propose a temporal extension to CoopFlow (Tata et al., 2008), an existing approach for designing and modeling IOWs, based on Time Petri Net models and tools. Methods are given, based on reachability analysis and model checking techniques, for verifying whether or not the added temporal requirements are satisfied, while maintaining the core advantage of CoopFlow, i.e. that each partner can keep the critical parts of its business process private.

Paper Nr: 212
Title:

SPECIFICATION AND INSTANTIATION OF DOMAIN SPECIFIC PATTERNS BASED ON UML

Authors:

Saoussen Rekhis Boubaker, Nadia Bouassida and Rafik Bouaziz

Abstract: Domain-specific design patterns provide for architecture reuse for recurring design problems in a specific software domain. They capture the domain knowledge and design expertise needed for developing applications. Moreover, they accelerate software development, since the design of a new application consists of adapting existing patterns instead of modeling from scratch. However, some problems slow their expansion, because they have to incorporate flexibility and variability in order to be instantiated for various applications in the domain. This paper proposes new UML notations that better represent domain-specific design patterns. These notations express the variability of patterns to facilitate their comprehension and guide their reuse. The UML extensions are then illustrated in the process control system context using the example of a data acquisition pattern.

Paper Nr: 213
Title:

INTENTION DRIVEN SERVICE COMPOSITION WITH PATTERNS

Authors:

Emna Fki, Chantal Soulé Dupuy, Saïd Tazi and Mohamed Jmaiel

Abstract: Service-oriented architecture (SOA) is an emerging approach for building systems based on interacting services. Services need to be discovered and composed in order to meet user needs. Most of the time, these needs correspond to some kind of intention, and are therefore expressed not in a technical way but in an intentional way. Available service descriptions, and those of their compositions, are however rather technical, so matching user needs to these services is not a simple issue. This paper defines an intention-driven service description model. It introduces a service composition mechanism which supports the achievement of a user's intention within a given context. We consider service patterns and investigate how these patterns can be used to represent reusable generic services and to generate specific composite services.

Paper Nr: 224
Title:

ENGINEERING AGENT-BASED INFORMATION SYSTEMS - A Case Study of Automatic Contract Net Systems

Authors:

Vincent Couturier, Marc-Philippe Huget and David Telisson

Abstract: In every business, the call for tenders has become an indispensable instrument for fostering the negotiation of new trade agreements. Selecting and awarding offers is nowadays a long process conducted manually: it is necessary to define criteria for selecting the best offer, evaluate each proposal and negotiate a business contract. In this paper, we present an agent-based approach for the development of systems for the automatic award of contracts (here called Automatic Contract Net Systems). Selection and negotiation are then performed automatically through communication between agents. We focus in this paper on tendering and the selection of the best offer. To facilitate the development of complex systems such as multi-agent systems, we adopt software patterns that guide the designer in the analysis, design and implementation on an agent-based execution platform.

Paper Nr: 226
Title:

QUALITY MEASUREMENT MODEL FOR REQUIREMENTS ENGINEERING FLOSS TOOLS

Authors:

María Pérez, Edumilis Méndez, Kenyer Dominguez and Luis E. Mendoza

Abstract: Delivering a suitable, quality software product depends largely on a proper definition of the system requirements. Requirements Engineering (RE) is the process of discovering, refining, modeling and specifying software requirements. Given the trend of using Free/Libre Open Source Software (FLOSS) tools, we should consider their strengths and weaknesses in the light of suitable RE. This article proposes a quality measurement model for RE FLOSS tools to support their selection process. The characteristics selected for evaluation include Functionality, Maintainability and Usability. The model was applied to four FLOSS tools and assessed for completeness, accuracy and relevance, to establish which FLOSS tools support RE either totally or partially, thus making it useful for Small and Medium-sized Enterprises.

Paper Nr: 243
Title:

MODELING ERP BUSINESS PROCESSES USING LAYERED QUEUEING NETWORKS

Authors:

Stephan Gradl, Manuel Mayer, Holger Wittges and Helmut Krcmar

Abstract: This paper presents an approach to simulating enterprise resource planning (ERP) systems using Layered Queueing Networks (LQNs). A case study of an existing production planning process shows how LQN models can be exploited as a performance analysis tool. To gather data about the internal architecture of the ERP system, an internal trace is analyzed and a detailed model is built to evaluate the system's performance and scalability in terms of response times with an increasing number of users and CPUs. It is shown that the solver results match the characteristics observed in practice. Depending on the number of CPUs, constant response times are observed up to a certain number of concurrent users.
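The behaviour the abstract describes (response times staying roughly constant until a saturation point that depends on the number of CPUs) is the classic shape of multi-server queueing models. As a hedged illustration only (a textbook M/M/c sketch, not the paper's LQN model; all names are ours), the effect can be reproduced with the Erlang-C formula:

```python
from math import factorial

def mmc_response_time(lam: float, mu: float, c: int) -> float:
    """Mean response time of an M/M/c queue via the Erlang-C formula.

    lam: arrival rate (requests/s), mu: service rate per server, c: servers.
    """
    a = lam / mu                  # offered load
    rho = a / c                   # per-server utilisation
    assert rho < 1, "system is unstable"
    # Probability an arriving request has to wait (Erlang C)
    head = a ** c / factorial(c)
    p_wait = head / ((1 - rho) * sum(a ** k / factorial(k) for k in range(c)) + head)
    # Mean response time = service time + mean waiting time
    return 1 / mu + p_wait / (c * mu - lam)

# One CPU at 50% load: reduces to the M/M/1 result 1/(mu - lam)
print(mmc_response_time(0.5, 1.0, 1))   # → 2.0
# Four CPUs at light load: response time stays near the bare service time
print(round(mmc_response_time(0.1, 1.0, 4), 3))
```

Adding servers flattens the response-time curve over a wider range of arrival rates, which mirrors the CPU-scaling observation in the case study.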

Paper Nr: 246
Title:

ASSESSING THE INTERFERENCE IN CONCURRENT BUSINESS PROCESSES

Authors:

N. R. T. P. van Beest, N. B. Szirbika and J. C. Wortmann

Abstract: Current Enterprise Information Systems support the business processes of organizations by explicitly or implicitly managing the activities to be performed and by storing the data employees require to do their work. However, concurrent execution of business processes may still yield undesired business outcomes as a result of process interference. As the disruptions are primarily visible to external stakeholders, organizations are often unaware of these cases. In this paper, a method is presented, along with an operational tool, that enables identification of potential interference and analysis of the severity of the interference resulting from concurrently executed processes. The method is subsequently applied to a case to verify it and to reinforce the relevance of the problem.

Paper Nr: 250
Title:

MODELING DATA INTEROPERABILITY FOR E-SERVICES

Authors:

Jose C. Delgado

Abstract: In global, distributed systems, services evolve independently and there is no dichotomy between compile and run-time. This has severe consequences. Static data typing cannot be assumed. Data typing by name and reference semantics become meaningless. Garbage collection cannot be used in this context and (references to) services can fail temporarily at one time or another. Classes, inheritance and instantiation also don’t work, because there is no coordinated global compile-time. This paper proposes a service interoperability model based on structural conformance to solve these problems. The basic modeling entity is the resource, which can be described by structure and by behavior (service). We contend that this model encompasses and unifies layers found separate in alternative models, in particular Web Services and RESTful services.

Paper Nr: 256
Title:

CONTEXT-BASED PROCESS LINE

Authors:

Vanessa Nunes, Claudia Werner and Flavia Santoro

Abstract: The complexity and dynamism of day-to-day activities in organizations are inextricably linked, each impacting the other and increasing the challenge of constantly adapting the way work is organized to address emerging demands. In this scenario, a variety of information, insights and reasoning is processed between people and systems during process execution. We argue that process variations could be decided in real time, using collected context information. This paper presents a proposal for a business process line cycle whose central artefact is a set of activities encapsulated in the form of components. We explain how composition and adaptation of work may occur in real time and discuss a scenario for this proposal.

Paper Nr: 265
Title:

FRAMEWORKS FOR UNDERSTANDING THE CHANGING ROLE OF INFORMATION SYSTEMS IN ORGANIZATIONS

Authors:

Jorge Cordeiro Duarte and Mamede Lima-Marques

Abstract: Information Systems (IS) evolve with advances in technology in a silent but radical manner. Initially monolithic and isolated, they are now modular, diversified, integrated and ubiquitous. This ubiquity is not always planned: new applications arise quickly and spontaneously. Current IS applications serve different needs and different audiences, inside and outside the organization, not always in an organized and integrated manner. Technology trends such as BPM and SOA accentuate the pace of change. The wide range of applications and the pace of innovation bring complexity to IS planning, development and integration. This work comprises a study of the evolution and current status of IS in organizations. Its main objective is to provide an integrated theoretical framework that helps academia and organizations understand, plan and conduct their information systems efforts efficiently.

Paper Nr: 270
Title:

COLLABORATIVE BUSINESS PROCESS ELICITATION THROUGH GROUP STORYTELLING

Authors:

João Carlos de A. R. Gonçalves, Flávia Santoro and Fernanda Baião

Abstract: Business process modelling remains a costly and complex task for most organizations. One of the main difficulties lies in the process elicitation phase, where the process analyst attempts to extract information from the process participants and other resources involved. This paper describes a case study in which a previously proposed Story Mining method was applied. The Story Mining method and its supporting tool, ProcessTeller, make use of collaborative storytelling and natural language processing techniques for the semi-automatic extraction of BPMN-compliant business process elements from text.

Paper Nr: 274
Title:

A SPEM BASED SOFTWARE PROCESS IMPROVEMENT META-MODEL

Authors:

Rodrigo Santos de Espindola and Jorge Luis Nicolas Audy

Abstract: Nowadays, organizations use Software Process Improvement (SPI) reference models as the starting point for their quality improvement initiatives. There is a consensus that by understanding and improving the software process we can improve the software product as well. Several studies also indicate the concurrent adoption of multiple SPI reference models by organizations. This has increased the need for new approaches to integrate these SPI reference models with each other and with the software processes developed to comply with them. This paper proposes a SPEM-based SPI meta-model as a way to support these kinds of integration.

Paper Nr: 278
Title:

ON THE APPLICATION OF AUTONOMIC AND CONTEXT-AWARE COMPUTING TO SUPPORT HOME ENERGY MANAGEMENT

Authors:

Boris Shishkov, Martijn Warnier and Marten van Sinderen

Abstract: Conventional energy sources are becoming scarce and, with no eco-friendly alternatives deployed at large scale, it is important to find ways to better manage energy consumption. In this paper, we propose ICT-related solution directions that concern energy consumption management within a household. In particular, we consider two underlying objectives: (i) to minimize the energy consumption of households; (ii) to avoid energy consumption peaks in larger residential areas. The proposed solution directions envision a service-oriented approach that integrates ideas from Autonomic Computing and Context-aware Computing: the former motivates selective on/off powering of thermostatically controlled appliances, which allows energy to be redistributed over time; the latter motivates the use of context information to analyze the energy requirements of a household at a particular moment, based on which appliances can be powered down. Within the household, this helps keep energy consumption as low as possible without violating the preferences of residents; area-wise, it helps avoid energy consumption peaks. It is thus expected that such an approach can contribute to the reduction of home energy consumption in an effective and user-friendly way. Our proposed solution directions are not only introduced and motivated but also partially elaborated through a small illustrative example.

Paper Nr: 301
Title:

RISK ANALYSIS FOR INTER-ORGANIZATIONAL CONTROLS

Authors:

Joris Hulstijn and Jaap Gordijn

Abstract: Existing early requirements engineering methods for dealing with governance and control issues do not explicitly support comparison of alternative solutions and have no clear semantics for the notion of a control problem. In this paper we present a risk analysis method for inter-organizational business models, which is based on value modeling. A risk is the likelihood of a negative event multiplied by its impact. In value modeling, the impact of a control problem is given by the missing value. The likelihood can be estimated based on assumptions about trust and about the underlying coordination model. This allows us to model the expected value of a transaction. The approach is illustrated by a comparison of the risks of different electronic commerce scenarios for delivery and payment.
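The risk definition the abstract uses (likelihood of a negative event multiplied by its impact, where the impact is the missing value of a control problem) can be sketched numerically. The following is an illustrative sketch only, with hypothetical names and figures, not the paper's value-modeling method:

```python
def expected_value(transaction_value: float,
                   missing_value: float,
                   likelihood: float) -> float:
    """Expected value of a transaction under a control problem.

    risk = likelihood of the negative event x its impact, where the
    impact is the value that goes missing when the problem occurs.
    """
    risk = likelihood * missing_value
    return transaction_value - risk

# A 100-unit transaction where non-payment would cost the full value,
# with an estimated 5% likelihood of occurring:
print(round(expected_value(100.0, 100.0, 0.05), 2))  # → 95.0
```

Comparing such expected values across delivery-and-payment scenarios is the kind of trade-off the paper's electronic commerce comparison makes.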

Paper Nr: 325
Title:

PERFORMANCE MANAGEMENT AND CONTROL - A Case Study in Mercedes Benz Cyprus

Authors:

A. I. Kokkinaki and A. Vouldis

Abstract: This paper is a case study that outlines design and implementation issues related to an application that facilitates process management and controls business performance in a retailer of extended products. The notion of an extended product is that of a product bundled with services. Towards this aim, the case study described in this paper focuses on the following objectives: to review existing theory on designing and developing applications and interfaces for enterprise information systems, and to solicit the end-users' requirements based on which an information system is designed and developed.

Paper Nr: 326
Title:

AN APPROACH FOR THE DEVELOPMENT OF DOOH-ORIENTED INFORMATION SYSTEMS

Authors:

Pietro D’ambrosio, Filomena Ferrucci, Federica Sarro and Maurizio Tucci

Abstract: Recent years have been characterised by an increasing demand for digital services and multimedia content “out of home”. This poses new challenges to software factories in terms of system integration and extension. In this paper, we report on an industrial research project carried out by several ICT companies together with researchers from the University of Salerno. The goal of the project was to define a new approach for developing enterprise systems able to integrate traditional applications with Digital Out Of Home (DOOH) extensions. The experience was carried out by defining a methodology and tools for developing such systems in an industrial context. The proposed approach was evaluated by carrying out two case studies.

Paper Nr: 327
Title:

REQUIREMENTS FOR PERSONAL KNOWLEDGE MANAGEMENT TOOLS

Authors:

Max Völkel and Andreas Abecker

Abstract: Personal knowledge management (PKM) is a crucial element as well as a complement of enterprise knowledge management (EKM), which has been largely neglected by Enterprise Information Systems up to now. This paper collects requirements for a specific class of PKM software, which supports personal note taking and the idea of extending the human memory by information management. It introduces the knowledge-cue life cycle, which describes how information artefacts can be used to help denote, remember, use, and further develop knowledge embodied in people's heads. Based on this life cycle and on a literature study, this paper derives a comprehensive requirements catalogue to be fulfilled by knowledge articulation tools used in PKM. This requirements list can be used as a design specification and research agenda for PKM tool builders, and to assess the suitability of existing tools for PKM.

Paper Nr: 355
Title:

THE CONVERGENCE OF WORKFLOWS, BUSINESS RULES AND COMPLEX EVENTS - Defining a Reference Architecture and Approaching Realization Challenges

Authors:

Markus Döhring, Lars Karg, Eicke Godehardt and Birgit Zimmermann

Abstract: For years, research has been devoted to introducing flexibility into enterprise information systems. Corresponding concepts exist mainly in three established paradigms: workflow management, business rule management and complex event processing. It has, however, been indicated that integrating the three paradigms with respect to their meta-models and execution principles yields significant potential for more efficient and flexible enterprise applications, and that there is still a lack of conceptual and technical guidance for their integration. The contribution of this work is a loosely coupled architecture integrating all three paradigms. This includes a clear definition of its building blocks together with the main realization challenges. In this context, an approach is presented for assisting modelers in deciding which paradigm should be used, and in which way, for expressing a particular business aspect.

Paper Nr: 377
Title:

THE SNARE LANGUAGE OVERVIEW

Authors:

Alexandre Barão and Alberto Rodrigues da Silva

Abstract: Social network systems identify existing relations between social entities and provide a set of automatic inferences on these relations, promoting better interactions and collaborations between these entities. However, we find that most existing organizational information systems do not provide social network features from scratch, even though they have to manage social entities somehow. The focus of this paper starts from this fact; it proposes the SNARE Language as the conceptual framework of the SNARE system, short for “Social Network Analysis and Reengineering Environment”. SNARE's purpose is to promote social network capabilities in information systems not originally designed for this purpose. Visual models are needed to infer and represent new or established patterns of relations. This paper overviews the SNARE language and shows its applicability through several models concerning the application of SNARE to the real-world LinkedIn scenario.

Paper Nr: 381
Title:

COMBINING SEMANTIC TECHNOLOGIES AND DATA MINING TO ENDOW BSS/OSS SYSTEMS WITH INTELLIGENCE - Particularization to an International Telecom Company Tariff System

Authors:

Javier Martínez Elicegui, Germán Toro del Valle and Marta de Francisco Marcos

Abstract: Businesses need to “reduce costs” and improve their “time-to-market” to compete from a better position. Systems must contribute to these two goals through good designs and technologies that give them agility and flexibility in the face of change. Semantics and Data Mining are two key pillars for evolving current legacy systems towards smarter systems that adapt better to change. In this article we present some solutions for evolving existing systems, in which the end user has the possibility of modifying the functioning of the systems by incorporating new business rules into a Knowledge Base.

Paper Nr: 394
Title:

UNDERSTANDING ACCESS CONTROL CHALLENGES IN LOOSELY-COUPLED MULTIDOMAIN ENVIRONMENTS

Authors:

Yue Zhang and James B. D. Joshi

Abstract: Access control to ensure secure interoperation in multidomain environments is a crucial challenge. A multidomain environment can be categorized as tightly-coupled or loosely-coupled. The specific access control challenges in loosely-coupled environments have not been studied adequately in the literature. In this paper, we analyze the access control challenges specific to loosely-coupled environments. Based on our analysis, we propose a decentralized secure interoperation framework for loosely-coupled environments based on Role Based Access Control (RBAC). We believe our work takes the first step towards a more complete secure interoperation solution for loosely-coupled environments.
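The core RBAC idea the framework builds on is that users receive permissions only indirectly, through roles. A minimal sketch (all role, user and permission names here are hypothetical, not from the paper):

```python
# Permissions are attached to roles, never directly to users.
role_perms = {
    "clerk":   {"read_order"},
    "manager": {"read_order", "approve_order"},
}

# Users are assigned roles within their own domain.
user_roles = {
    "alice": {"manager"},
    "bob":   {"clerk"},
}

def allowed(user: str, perm: str) -> bool:
    """A request is allowed iff some role of the user grants the permission."""
    return any(perm in role_perms.get(r, set()) for r in user_roles.get(user, ()))

print(allowed("alice", "approve_order"))  # → True
print(allowed("bob", "approve_order"))    # → False
```

Cross-domain interoperation then amounts to mapping roles between domains, which is where the loosely-coupled challenges the paper analyzes arise.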

Paper Nr: 417
Title:

TOWARDS A COMMUNITY FOR INFORMATION SYSTEM DESIGN VALIDATION

Authors:

S. Dupuy-Chessa, D. Rieu and N. Mandran

Abstract: Information systems are becoming ubiquitous. This opens a large spectrum of possibilities for end-users, but design complexity is increasing, so domain-specific languages are being proposed, sometimes supported by appropriate processes. These proposals are interesting but under-validated. Even if validation is a difficult task that requires specific knowledge, we argue that validation should be systematic. Many problems remain to be considered to achieve this goal: 1) computer scientists are often not trained in evaluation; 2) the domain of information systems design validation and evaluation is still under construction. To cope with the first problem, we propose to capitalize evaluation practices into patterns so that they can be reused by non-specialists in validation practices. For the second issue, we propose an environment where evaluation specialists and engineering method specialists can work together to define their common and reusable patterns.

Paper Nr: 424
Title:

TOWARDS A MORE RELATIONSHIP-FRIENDLY ONTOLOGY FOUNDATION FOR CONCEPTUAL MODELLING

Authors:

Roger Tagg

Abstract: Researchers have for some years been looking to the field of ontology to provide a foundation structure of meaning that would serve as a yardstick against which different modelling systems and methodologies can be evaluated. The Bunge-Wand-Weber (BWW) ontology has led the field in this endeavour, but since 2000 has come under some criticism. A notable feature of BWW is that it does not treat relationships as first-class objects. Several recent works have proposed ontologies that do emphasize relationships, although to a somewhat limited extent. Based on previous work on a relationship-oriented ontology, this paper suggests directions in which a Mark 2 BWW could be evolved.

Paper Nr: 430
Title:

MODEL-DRIVEN ENGINEERING OF FUNCTIONAL SECURITY POLICIES

Authors:

Michel Embe Jiague, Marc Frappier, Frédéric Gervais, Pierre Konopacki, Régine Laleau, Jérémy Milhau and Richard St-Denis

Abstract: This paper describes an ongoing project on the specification and automatic implementation of functional security policies. We advocate a clear separation between functional behavior and functional security requirements, and propose a formal language to specify functional security policies. We are developing techniques by which a formal functional security policy can be automatically implemented; hence, our approach is strongly inspired by model-driven engineering. Furthermore, our formal language will enable us to use model checking techniques to verify that a security policy satisfies desired properties.

Paper Nr: 434
Title:

USER CONTEXT MODELS - A Framework to Ease Software Formal Verifications

Authors:

Amine Raji and Phillipe Dhaussy

Abstract: Several works emphasize the difficulties of software verification applied to embedded systems. In past years, formal verification techniques and tools have been widely developed and used by the research community. However, the use of formal verification at an industrial scale remains difficult and expensive, and requires a lot of time. This is due to the size and complexity of the manipulated models, but also to the important gap between the requirement models manipulated by different stakeholders and the formal models required by existing verification tools. In this paper, we fill this gap by providing the UCM framework to automatically generate the formal models used by formal verification tools. At this stage of our work, we generate behavior models of the environment actors interacting with the system directly from an extended form of use cases. These behavioral models can be composed directly with the system automata to be verified using existing model checking tools.

Paper Nr: 438
Title:

ANALYSIS OF EFFECTIVE APPROACH FOR BUSINESS PROCESS RE-ENGINEERING - From the Perspective of Organizational Factors

Authors:

Kayo Iizuka, Yasuki Iizuka and Kazuhiko Tsuda

Abstract: This paper presents an analysis of business process re-engineering (BPR) effects, including customer satisfaction, and of their formative factors. Although BPR has been studied for some decades, additional issues have arisen recently, e.g., the balance between efficiency and internal control (including information security management), and organizational reform or enterprise integration arising from recent economic circumstances. The analyses in this paper are aimed at addressing these issues. To clarify the mechanism for achieving BPR effectiveness, the analysis focuses on organizational perspectives and communication infrastructure.

Paper Nr: 440
Title:

A PROCESS-DRIVEN METHODOLOGY FOR MODELING SERVICE-ORIENTED COMPLEX INFORMATION SYSTEMS

Authors:

Alfredo Cuzzocrea, Alessandra De Luca and Salvatore Iiritano

Abstract: This paper extends state-of-the-art design methodologies for classical information systems by introducing an innovative methodology for designing service-oriented information systems. Service-oriented information systems can be viewed as information systems adhering to the novel service-oriented paradigm, with which a plethora of novel technologies, such as Web Services, Grid Services and Cloud Computing, is currently aligned. On the other hand, the current state-of-the-art literature includes few papers that focus on this interesting research challenge. With the aim of filling this gap, in this paper we provide a process-driven methodology for modeling service-oriented complex information systems, and we prove its effectiveness and reliability on a comprehensive case study represented by a real-life research project.

Paper Nr: 444
Title:

SITUATIONAL METHOD ENGINEERING APPLIED FOR THE ENACTMENT OF DEVELOPMENT PROCESSES - An Agent based Approach

Authors:

Holger Seemueller, Holger Voos, Benjamin Honke and Bernhard Bauer

Abstract: Interdisciplinary product development is faced with the collaboration of diverse roles and a multitude of interrelated artifacts. Traditional and sequential process models cannot deal with the long-lasting and dynamic behavior of the development processes of today. Moreover, development processes have to be tailored to the needs of the projects, which are usually distributed today. Thus, keeping these projects on track from a methodology point of view is difficult. In order to deal with these challenges, this paper will present a novel method engineering and enactment approach. It combines the ideas of workflow technologies and product line engineering for method engineering as well as agent technology for the development process enactment.

Posters
Paper Nr: 12
Title:

GEOPROFILE - UML Profile for Conceptual Modeling of Geographic Databases

Authors:

Gustavo Breder Sampaio, Filipe Ribeiro Nalon and Jugurta Lisboa-Filho

Abstract: After many years of research in the field of conceptual modeling of geographic databases, experts have produced various alternative conceptual models. However, there is still no consensus on which is the most suitable for modeling geographic data applications, which creates a number of problems for the field's advancement. A UML profile allows a structured and precise UML extension and is an excellent solution for standardizing domain-specific modeling, as it uses the entire UML infrastructure. This article presents the metamodel of a UML profile, called GeoProfile, developed specifically for the conceptual modeling of geographic databases. This is not a definitive proposal; we view this work as the first step towards the unification of the various existing models, aiming primarily at semantic interoperability.

Paper Nr: 28
Title:

A GAP ANALYSIS TOOL FOR SMES TARGETING ISO/IEC 27001 COMPLIANCE

Authors:

Thierry Valdevit and Nicolas Mayer

Abstract: Current trends indicate that information security is critical for today's enterprises. As managers realise they cannot ignore potential security risks, they tend to turn to the ISO/IEC 27001 standard in order to implement an Information Security Management System (ISMS). While adopted by large companies, ISMSs are still considered out of reach by numerous smaller entities. Helping SMEs attain ISO/IEC 27001 certification is still a challenge. In this context, the initial step of an ISMS implementation project is significant: a gap analysis highlighting the current status of the enterprise with regard to the standard, and thus the resources needed to succeed in the project. This paper presents the method and research work performed in order to design, evaluate and improve an SME-oriented gap analysis tool for ISO/IEC 27001.

Paper Nr: 53
Title:

EVALUATING UML SEQUENCE MODELS USING THE SPIN MODEL CHECKER

Authors:

Yoshiyuki Shinkawa

Abstract: The UML sequence diagram is one of the most important diagrams for behavior modeling; however, there are few established criteria, methodologies and processes for evaluating the correctness of the models depicted with this diagram. This paper proposes a formal approach to evaluating the correctness of UML sequence models using the SPIN model checker. For SPIN to deal with the models, they must be expressed in the form of Promela code and LTL formulae. A set of definite rules is presented that can extract such code and formulae from given UML sequence models.
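As an illustration only (not drawn from the paper itself), the correctness properties SPIN checks are stated in linear temporal logic; a typical response property over a sequence model, with hypothetical message names, might be:

```latex
\square \,\bigl( \mathit{request} \rightarrow \lozenge\, \mathit{response} \bigr)
```

i.e., at every point in every execution, whenever a request message occurs, a corresponding response message eventually follows.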

Paper Nr: 74
Title:

WEB SERVICES DEPLOYMENT ON P2P NETWORKS, BASED ON JXTA

Authors:

Marco Pereira, Marco Fernandes, Joaquim Arnaldo Martins and Joaquim Sousa Pinto

Abstract: Digital libraries require several different services. A common way to provide services to digital libraries is through Web Services, but Web Services are usually based on centralized architectures. In our work we propose that peer-to-peer networks can be used to relieve the weight of centralized components in the digital library infrastructure. One of the challenges of this approach is to develop a method that allows the integration of existing Web Services into this new reality. In this paper we propose the use of a proxy that exposes Web Services that are only accessible through the peer-to-peer network to the outside world. This proxy also manages interactions with existing external Web Services, making them appear as part of the network itself, thus enabling us to reap the benefits of both architectures.

Paper Nr: 78
Title:

MODEL-DRIVEN SYSTEM TESTING OF SERVICE ORIENTED SYSTEMS - A Standard-aligned Approach based on Independent System and Test Models

Authors:

Michael Felderer, Joanna Chimiak-Opoka and Ruth Breu

Abstract: This paper presents a novel standard-aligned approach for model-driven system testing of service-oriented systems, based on tightly integrated but separated platform-independent system and test models. Our testing methodology supports test-driven development and guarantees high-quality system and test models by checking consistency and coverage, respectively. Our test models are executable and can be considered part of the system definition. We show that our approach is suited to handling important system testing aspects of service-oriented systems, such as the integration of various service technologies or the testing of service level agreements. We also provide full traceability between functional and non-functional requirements, the system model, the test model, and the executable services of the system, which is crucial for efficient test evaluation. The system model and test model are aligned with the existing SoaML and UML Testing Profile specifications via a mapping of metamodel elements. The concepts are presented on an industrial case study.

Paper Nr: 88
Title:

A GENERIC METRIC FOR MEASURING COMPLEXITY OF MODELS

Authors:

Christian Schalles, John Creagh and Michael Rebstock

Abstract: In recent years, various object- and process-oriented modelling methods have been developed to support the process of modelling in enterprises. When these methods are applied, graphical models are generated and used to depict various aspects of enterprise architectures. In this context, surveys analyzing modelling languages in different ways have been conducted. In many cases these surveys include experimental data collection methods, and the complexity of the concrete models often affects the outcome of these studies. To ensure the comparability of complexity values across different models, a generic metric for measuring the complexity of models is proposed.

Paper Nr: 128
Title:

FROM STRATEGIC TO CONCEPTUAL ENTERPRISE INFORMATION REQUIREMENTS - A Mapping Tool

Authors:

Gianmario Motta and Giovanni Pignatelli

Abstract: Enterprise information analysis can be modeled on three levels: Logical, Conceptual and Strategic. The logical level is used daily on thousands of projects to design databases. The conceptual level is used by analysts to structure detailed information needs expressed by users. The strategic level is used by IT and user management to define the enterprise information architecture and/or to assess the viability of the current information assets. While mapping conceptual onto logical modeling is well established, the strategic and conceptual levels are poorly linked. This drawback very often prevents enterprises from implementing a sound information strategy. We present a method that maps strategic enterprise information onto conceptual information modeling. For strategic modeling, a comprehensive framework is used that enables analysts to readily identify the information domains of a wide range of enterprises. Mapping strategic to conceptual models is performed by a set of simple, predefined rules. The paper also illustrates the tool that has been developed to assist the whole design and mapping process. Finally, a case study on materials handling exemplifies our approach.

Paper Nr: 165
Title:

AN EFFICIENT METHOD FOR GAME DEVELOPMENT USING COMPILER

Authors:

Jae Seong Jeong and Soon Ghon Kim

Abstract: As online game development projects grow more extensive, more manpower becomes essential. In programming especially, the original scheme produced by the planning department may not be fully realized by the programmer; depending on the programmer's ability, the result can differ from the plan and be less enjoyable than intended. Much time must then be spent checking whether an error originated in planning or in programming. To solve these problems, we have developed a compiler dedicated to games, which uses the game graphics API and the quest API already employed in game development. The compiler helps the game planner find logical problems directly through their own source coding, and it helps the game developer produce a variety of efficient plans. It has the advantages of lowering dependency on the game programmer as well as reducing production cost and labor resources.

Paper Nr: 174
Title:

A FRAMEWORK FOR ESTIMATING THE ENVIRONMENTAL COSTS OF THE TECHNOLOGICAL ARCHITECTURE - An Approach to Reducing the Organizational Impact on Environment

Authors:

Jorge Cavaleiro, André Vasconcelos and André Filipe Pedro

Abstract: Green IT developed from growing concerns about rising information systems energy costs and power consumption, along with organizations' need for an environmentally efficient image. This work addresses and links three main concerns: enterprise architecture modelling, the need for energy cost cuts, and the energy efficiency of information technology. It describes a new method to estimate the energy costs and CO2 emissions of the IT architecture, based on the technology layer of the enterprise architecture, and some solutions for eliminating or at least reducing its environmental impact.

Paper Nr: 180
Title:

ON ENTERPRISE INFORMATION SYSTEMS ADDRESSING THE DEMOGRAPHIC CHANGE

Authors:

Silvia Schacht and Alexander Mädche

Abstract: The demographic change influences almost all areas of life, such as health care, education, social systems and the productivity of companies. Therefore, appropriate adjustments to an aging society are necessary. To date, many studies have been carried out weighing the advantages and disadvantages of the various alternatives, but the impact of an older workforce on productivity when using enterprise information systems has been considered only sparsely. This paper presents our research intention to design and develop enterprise information systems (EIS) that address the demographic change.

Paper Nr: 194
Title:

THE JANUS FACE OF LEAN ENTERPRISE INFORMATION SYSTEMS - An Analysis of Signs, Systems and Practices

Authors:

Coen Suurmond

Abstract: The term “Enterprise Information System” can be used in two ways: (1) to denote the comprehensive structure of information flows in an enterprise, or (2) to denote the computer system supporting the business processes. In this paper I argue that we should always take the first meaning as our starting point, and that in analysing the use of information and information flows we should recognise the different kinds of sign systems appropriate to the different uses of information in business processes. In system development we should carefully analyse the way a specific company operates, and design a comprehensive information system according to the principles of Lean Thinking and the conversational maxims of Grice.

Paper Nr: 209
Title:

VALIDATION OF A MEASUREMENT FRAMEWORK OF BUSINESS PROCESS AND SOFTWARE SYSTEM ALIGNMENT

Authors:

Lerina Aversano, Carmine Grasso and Maria Tortorella

Abstract: The degree of alignment between a business process and the supporting software systems expresses how well the software systems support the business process. This measure can be used to indicate business requirements that the software systems do not implement. Methods are needed for detecting the alignment level between software systems and business processes and for identifying the software changes to be performed to reach and keep an adequate alignment level. This paper proposes a framework including a set of metrics that codify the alignment concept, with the aim of measuring it, detecting misalignment, and identifying and performing software evolution changes. The framework is then validated through a case study.

Paper Nr: 233
Title:

THE IEEE STANDARDS COMMITTEE P1788 FOR INTERVAL ARITHMETIC REQUIRES AN EXACT DOT PRODUCT

Authors:

Ulrich Kulisch

Abstract: Computing with guarantees is based on two arithmetical features. One is fixed (double) precision interval arithmetic. The other is dynamic precision interval arithmetic, here also called long interval arithmetic. The basic tool for achieving high-speed dynamic precision arithmetic for real and interval data is an exact multiply-and-accumulate operation and, with it, an exact dot product. In fact, the simplest and fastest way of computing a dot product is to compute it exactly. Pipelining allows it to be computed at the same high speed as vector operations on conventional vector processors. Long interval arithmetic fully benefits from such high speed. Exactness brings very high accuracy, and thereby stability, into computation. This document is intended to provide some background information, to increase awareness, and to informally specify the implementation of an exact dot product.
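The paper's proposal concerns a hardware long accumulator; purely as an illustration of why accumulating exactly and rounding only once matters, the following sketch (our own, not the paper's fixed-point design) contrasts exact rational accumulation with naive floating-point summation:

```python
from fractions import Fraction

def exact_dot(xs, ys):
    """Dot product accumulated exactly over the rationals, rounded once at the end."""
    acc = Fraction(0)
    for x, y in zip(xs, ys):
        acc += Fraction(x) * Fraction(y)  # each float converts to an exact rational
    return float(acc)  # the only rounding step

def naive_dot(xs, ys):
    """Conventional accumulation: one rounding error per addition."""
    s = 0.0
    for x, y in zip(xs, ys):
        s += x * y
    return s

# An ill-conditioned sequence: the small term is swallowed, then cancellation
# wipes out everything the naive sum retained.
xs = [1e16, 1.0, -1e16]
ys = [1.0, 1.0, 1.0]
print(naive_dot(xs, ys))  # 0.0 — the 1.0 was absorbed into 1e16 and then cancelled
print(exact_dot(xs, ys))  # 1.0 — the exact accumulator preserves it
```

The exact result differs from the naive one by the full magnitude of the answer, which is the instability the abstract says an exact dot product eliminates.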

Paper Nr: 253
Title:

COMPONENT-ORIENTED MODEL-BASED WEB SERVICE ORCHESTRATION

Authors:

Suela Berisha, Jacques Hamalian and Béatrice Rumpler

Abstract: Web service orchestration enables cross-departmental coordination of business processes in a heterogeneous environment. We have designed a component-oriented web service orchestration based on service functionality in an Enterprise Service Bus (ESB). Our orchestration model is centralized: a single service is the central coordinator containing the composition logic of the other services involved. These services are modelled using the Service Component Architecture (SCA) specification, defined in the UML profile described in this paper. SCA conforms to SOA (Service Oriented Architecture) principles using CBSE (Component-Based Software Engineering) techniques. Finally, our specified orchestration model is implemented using the Model-Driven Architecture (MDA) approach.

Paper Nr: 254
Title:

REACT-MDD - Reactive Traceability in Model-driven Development

Authors:

Marco Costa and Alberto Rodrigues da Silva

Abstract: The development of information systems has evolved into a complex task involving a multitude of programming and modelling paradigms, notations and technologies. Tools such as integrated development environments (IDE), computer-aided systems engineering (CASE) tools and relational database management systems (RDBMS), among others, have evolved to a reasonable state and are used to generate the different types of artefacts needed in this context. ReacT-MDD is an open model for traceability between artefacts that has been instantiated in a prototype. We present and discuss some practical issues of ReacT-MDD in the context of reactive traceability, which is also described.

Paper Nr: 258
Title:

EXTENDING DEMO - CONTROL ORGANIZATION MODEL - Modeling an Organization's Viability Norms, Dysfunctions and Resilience Strategies

Authors:

David Aveiro, A. Rito Silva and José Tribolet

Abstract: In this paper we present part of an extension to the Design and Engineering Methodology for Organizations (DEMO) – a proposal for an ontological model of the generic Control Organization that, we argue, exists in every organization. With our proposal, DEMO can now be used to explicitly specify critical properties of an organization – which we call measures – whose values must respect certain restrictions imposed by other properties of the organization – which we call viability norms. We can now also precisely specify, with DEMO, resilience strategies that control and eliminate dysfunctions – violations of viability norms.

Paper Nr: 268
Title:

TOWARDS A PROCESS TO INFORMATION SYSTEM DEVELOPMENT WITH DISTRIBUTED TEAMS

Authors:

Gislaine Camila Lapasini Leal, César Alberto da Silva, Elisa Hatsue Moriya Huzita and Tania Fatima Calvi Tait

Abstract: The software engineering area offers support for managing the information systems development process. Software engineering processes have well-defined artifacts, roles and activities, allowing adjustments in specific situations. However, these processes are generally oriented towards the development of co-located projects. They do not take into account the peculiarities regarding coordination, control and communication that arise when development is carried out by distributed teams. The purpose of this paper is to contribute to the software engineering area by presenting a comparative analysis of some development processes from the perspective of development with distributed teams. Additionally, it points to the need for a process definition that includes the peculiarities of this approach to software development.

Paper Nr: 306
Title:

A LEAN ENTERPRISE ARCHITECTURE FOR BUSINESS PROCESS RE-ENGINEERING AND RE-MARKETING

Authors:

Clare L. Comm and Dennis F. X. Mathaisel

Abstract: An agile enterprise is an organization that rapidly adapts to market and environmental changes in the most productive and cost-effective ways. To be agile, an enterprise should utilize the key principles of enterprise architecting, enabled by the most recent developments in information and communication technologies and by lean principles. This paper describes a Lean Enterprise Architecture (LEA) to organize the activities for the transformation of the enterprise to agility. It is the application of systems architecting methods to design, develop, produce, construct, integrate, validate, and implement a lean enterprise using information engineering and systems engineering methods and practices. LEA is a comprehensive framework used to align an organization's information technology assets, people, operations, and projects with its operational characteristics. The architecture defines how information and technology support the business operations and provide benefit for its stakeholders. The architecting process incorporates lean attributes and values as design requirements in creating the enterprise. The application of the LEA is less resource intensive and disruptive to the organization than traditional lean enterprise transformation methods and practices. Thus, it is essential that the merits of this process are re-marketed (communicated) to the stakeholders to encourage its acceptance.

Paper Nr: 340
Title:

COLOURED PETRI NETS WITH PARALLEL COMPOSITION TO SEPARATE CONCERNS

Authors:

Ella Roubtsova and Ashley McNeile

Abstract: We define a modeling language that combines Coloured Petri Nets with Protocol Modeling semantics. This language combines the expressive power of Coloured Petri Nets in describing behavior with the ability, provided by Protocol Modeling, to compose partial behavioral descriptions. The resulting language can be considered a domain-specific, Coloured-Petri-Net-based language for deterministic and constantly evolving systems.

Paper Nr: 343
Title:

EMPLOYEES’ UPSKILLING THROUGH SERVICE-ORIENTED LEARNING ENVIRONMENTS

Authors:

Boris Shishkov, Marten van Sinderen and Roberto Cavalcante

Abstract: Aiming to increase their competitiveness, many companies are turning to new learning concepts and strategies that stress collaboration among employees. Individual learning and organizational learning should be considered in combination, to create a synergetic effect when introducing and maintaining skills among employees, as well as when personnel must be trained in non-core competences. For such training, the possibility of collaborating with third parties is advantageous, especially in co-creating and/or using learning content. In this paper, we propose ICT-related solution directions for the adaptable, flexible, and collaborative upskilling of employees. We consider, in particular, two underlying objectives, namely adjustability of content+process and collaborative content co-creation. Adjustability of content+process concerns the specialization of generic content+process, driven by the user (employee), and is thus about individualization. Collaborative content co-creation concerns the (cross-border) dynamic creation of content by several persons. Our proposed solution directions (introduced and motivated in this paper) envision an approach that integrates ideas from computing paradigms related to SOA – Service-Oriented Architecture. We expect such an approach to contribute to the effective and user-friendly upskilling of employees.

Paper Nr: 370
Title:

COMPATIBILITY VERIFICATION OF COMPONENTS IN TERMS OF FUNCTIONAL AND EXTRA-FUNCTIONAL PROPERTIES - Tool Support

Authors:

Kamil Ježek and Přemek Brada

Abstract: Component-based programming, as a technology that increases development speed and decreases the cost of the final product, promises a noticeable improvement in the development of large enterprise applications. Even though component-based programming is a promising technology, it has not yet reached maturity. The main problem addressed in this paper is the compatibility checking of components in terms of functional and extra-functional properties, and its insufficient tool support. This paper summarizes a mechanism for component compatibility checks and introduces a tool whose aim is to fill this gap, mainly with respect to the phase of testing the assembly of components. The introduced mechanism and tool allow component bindings to be checked before deployment into the target environment. The tool displays a component graph and details of components, and highlights incompatibility problems. Hence, the tool validates the presented mechanism and provides useful support for developers when deciding which component to use.

Paper Nr: 390
Title:

ONTOLOGICAL CONFIGURATOR - A Novel Approach

Authors:

F. Clarizia, F. Colace and M. de Santo

Abstract: The ability to create customized product configurations that satisfy a user's needs is a major aim of companies. In particular, companies seek web-based services that allow users to create and customize the desired product in a simple and intuitive way. One research area that has lacked progress is the definition of a common vocabulary that enables consumer-to-manufacturer and manufacturer-to-manufacturer communication. Enabling this communication opens possibilities such as the ability to express customer requirements correctly and to exchange knowledge between manufacturers. A popular approach to expressing knowledge uses ontological formalisms. Using an ontology, a vocabulary can be defined that allows interested parties to specify and share common knowledge and serves to define a framework for the representation of that knowledge. With this aim, this paper presents an ontology-based configurator system that finds the configuration that best satisfies the user's needs, starting from the desired requirements, the available components, the context information and previous similar configurations. The process by which the system finds the candidate configuration follows an approach known as “Slow Intelligence”. This paper presents the proposed approach in detail and describes its first application.

Paper Nr: 403
Title:

HEIDEGGER AND PEIRCE - Learning with the Giants - Critical Insights for IS Design

Authors:

Ângela Lacerda Nobre

Abstract: Martin Heidegger’s ontology and Charles Sanders Peirce’s semiotics offer vastly unexplored potential for IS design and development. Though several authors have explored these giants’ works, such contributions have seldom been disseminated and applied within concrete organisations, in particular as contributions to organisational IS design. Within the current context of post-industrial society there is an urgent need to further develop the insights of these scholars. The links between formal and informal processes, between tacit and explicit knowledge, and between diachronic and synchronic analysis are critical for understanding today’s competitiveness. Heidegger’s and Peirce’s works are crucial for a better grasp and optimisation of current complexity at the organisational level.

Paper Nr: 407
Title:

CATAPHORIC LEXICALISATION

Authors:

Robert Michael Foster

Abstract: A traditional lexicalization analysis of a word looks backwards in time, describing each change in the word’s form and usage patterns from actuation (creation) to the present day. I suggest that this traditional view of lexicalization can be labelled Anaphoric Lexicalization to reflect its focus upon what has passed. A corresponding forward-looking process can then be envisaged called Cataphoric Lexicalization. Applying Cataphoric Lexicalization to an existing phrase from a sub-language generates a series of possible lexemes that might represent the target phrase in the future. The method is illustrated using a domain specific example. The conclusion suggests that rigorous application of Cataphoric Lexicalization by a community would in time result in a naturally evolved controlled lexicon.

Paper Nr: 420
Title:

COMPLEX NETWORKS AND COMPLEX SYSTEMS FOR SUSTAINABILITY - A Nature-based Framework for Sustainable Product Design and Development

Authors:

Joe A. Bradley and Roberto G. Aldunate

Abstract: Nature-based systems are efficiently designed and are able to meet many system requirements, such as scalability, adaptability, self-organization, resilience, robustness, durability, reliability, self-monitoring, self-repair, and many others. Using nature’s examples as a guidepost provides a unique platform for developing more environmentally sustainable products and systems. This position paper makes a case for an interdisciplinary approach to sustainable design and development. It suggests that designs should not only mimic natural behaviours but should also benefit from natural phenomena (e.g., wind turbines). The paper proposes a conceptual modeling system framework whereby physical products and systems are designed and modeled with the added benefit of knowing how similar systems work in nature. Developing such a system is nontrivial and requires an interdisciplinary approach; realizing it will require merging analytical and computational models of natural and human-made systems into a single information system. In this position paper, we discuss the framework from a bird's-eye view.

Area 4 - Software Agents and Internet Computing

Full Papers
Paper Nr: 19
Title:

ASPECT-MONITOR - An Aspect-based Approach to WS-contract Monitoring

Authors:

Mario Freitas da Silva, Itana Maria de Souza Gimenes, Marcelo Fantinato, Maria Beatriz Felgar de Toledo and Alessandro Fabricio Garcia

Abstract: Contract monitoring is carried out to ensure the Quality of Service (QoS) attributes and levels specified in an electronic contract throughout business process enactment. This paper proposes an approach to improving QoS monitoring based on the aspect-oriented paradigm. Monitoring concerns are encapsulated into aspects that are executed when specific process points are reached. Unlike other approaches, the proposed solution requires no instrumentation, uses Web services standards, and provides an integrated infrastructure for contract establishment and monitoring. Moreover, a Business Process Management Execution Environment is designed to automatically support the interaction between customer, provider and monitor organizations.

Paper Nr: 86
Title:

LOCATING AND EXTRACTING PRODUCT SPECIFICATIONS FROM PRODUCER WEBSITES

Authors:

Maximilian Walther, Ludwig Hähne, Daniel Schuster and Alexander Schill

Abstract: Gathering product specifications from the Web is labor-intensive and still requires much manual work to retrieve and integrate the information in enterprise information systems or online shops. This work aims at significantly easing this task by introducing algorithms for automatically retrieving and extracting product information from producers’ websites while only being supplied with the product’s and the producer’s name. Compared to previous work in the field, it is the first approach to automate the whole process of locating the product page and extracting the specifications while supporting different page templates per producer. An evaluation within a federated consumer information system proves the suitability of the developed algorithms. They may easily be applied to comparable product information systems as well, to minimize the effort of finding up-to-date product specifications.

Paper Nr: 113
Title:

QUERYABLE SEPA MESSAGE COMPRESSION BY XML SCHEMA SUBTRACTION

Authors:

Stefan Böttcher, Rita Hartel and Christian Messinger

Abstract: In order to standardize electronic payments within and between the member states of the European Union, SEPA (Single Euro Payments Area), an XML-based standard format, was introduced. As financial institutes have to store and process huge amounts of SEPA data each day, the verbose structure of XML becomes a bottleneck. In this paper, we propose a compressed format for SEPA data that removes from a SEPA document the data that is already defined by the given SEPA schema. The compressed format allows all operations that have to be performed on SEPA data to be executed on the compressed data directly, i.e., without prior decompression. Moreover, the queries used in our evaluation can be processed on compressed SEPA data at a speed comparable to ADSL2+, the fastest ADSL standard. In addition, our tests show that the compressed format reduces the data size to 11% of the original SEPA messages on average, i.e., it compresses SEPA data three times more strongly than compressors like gzip, bzip2 or XMill, although these compressors do not allow direct query processing of the compressed data.
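The core idea of schema subtraction, storing only what the schema does not already dictate, can be illustrated generically. The following is a simplified sketch under invented element names, not the paper's actual SEPA encoding: when a schema fixes the names and order of child elements, a conforming document is fully determined by its text values, so only those need to be stored.

```python
# Hypothetical mini-schema: element names and their order are fixed by the
# schema, so tags carry no information and can be subtracted from the document.
SCHEMA = ["MsgId", "CreDtTm", "NbOfTxs", "CtrlSum"]

def compress(values_by_tag):
    """Keep only the text values, in schema order; the tags are implied."""
    return "|".join(values_by_tag[tag] for tag in SCHEMA)

def decompress(payload):
    """Reattach the schema-defined tags to the stored values."""
    return dict(zip(SCHEMA, payload.split("|")))

doc = {"MsgId": "MSG-001", "CreDtTm": "2010-01-01T00:00:00",
       "NbOfTxs": "2", "CtrlSum": "100.50"}
packed = compress(doc)
assert decompress(packed) == doc  # lossless round trip
print(packed)  # MSG-001|2010-01-01T00:00:00|2|100.50
```

A query such as "return the value of `CtrlSum`" can be answered directly on the packed form by position, which is the sense in which such a format supports querying without prior decompression.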

Paper Nr: 255
Title:

REPUTATION-BASED SELECTION OF WEB INFORMATION SOURCES

Authors:

Donato Barbagallo, Cinzia Cappiello, Chiara Francalanci and Maristella Matera

Abstract: The paper compares Google’s ranking with the ranking obtained by means of a multi-dimensional source reputation index. The data quality literature defines reputation as a dimension of information quality that measures the trustworthiness and importance of an information source. Reputation is recognized as a multi-dimensional quality attribute. The variables that affect the overall reputation of an information source are related to the institutional clout of the source, to the relevance of the source in a given context, and to the general quality of the source’s information content. We have defined a set of variables measuring the reputation of Web information sources along these dimensions. These variables have been empirically assessed for the top 20 sources identified by Google as a response to 100 queries in the tourism domain. Then, we have compared Google’s ranking and the ranking obtained along each reputation variable for all queries. Results show that the assessment of reputation represents a tangible aid to the selection of information sources.

Paper Nr: 312
Title:

TOWARDS AUTOMATED SIMULATION OF MULTI AGENT BASED SYSTEMS

Authors:

Ante Vilenica and Winfried Lamersdorf

Abstract: The simulation of systems offers a viable approach to finding the optimal configuration of a system with respect to time, costs or any other utility function. In order to speed up the development process and to relieve developers of the cumbersome work related to executing simulation runs, i.e. performing simulation management manually, it is desirable to have a framework providing tools that perform simulation management automatically. This work addresses this issue and presents an approach that reduces the effort of managing simulations, i.e. it eases and automates their execution, observation, optimization and evaluation. The approach consists of a declarative simulation description language and a framework capable of automatically managing simulations, thereby reducing the time and money required to perform them. Furthermore, this work targets a particular subset of simulations, multi-agent-based simulation, which has attracted considerable attention in many areas of science as well as in commercial applications over the last decade. The applicability of the approach is demonstrated by a case study called ”mining ore resources”.

Paper Nr: 398
Title:

AN APPROACH FOR WEB SERVICE DISCOVERY BASED ON COLLABORATIVE STRUCTURED TAGGING

Authors:

Uddam Chukmol, Aïcha-Nabila Benharkat and Youssef Amghar

Abstract: This research work presents folksonomic annotation used in a collaborative tagging system to enhance the Web service discovery process. More expressive than traditional tags, this structured tagging method is able to reflect the functional capability of Web services while remaining easier to use and more accessible to users than ontology- or logic-based annotation formalisms. We describe a Web service retrieval system exploiting the aforementioned structured tagging. System user profiling is also addressed in order to assign reputation and compute tag recommendations. We present some interesting usage scenarios for this system as well as a strategy for evaluating its performance.

Short Papers
Paper Nr: 20
Title:

THE IMPACT OF COMPETENCES ASSESSMENT SYSTEMS TO FIRM PERFORMANCE - Study on Project Management e-Assessment

Authors:

Maria-Iuliana Dascalu, Camelia Delcea, Dragos Palaghita and Bogdan Vintila

Abstract: This paper studies the relationship between the use of e-assessment in the field of project management and firm performance. For this purpose, a case study was designed to explore whether the extensive use of an e-assessment application influences the financial and non-financial indicators of firm performance. The participants in the study were users of the e-assessment application: employees working in Romanian firms in the IT, education and consulting sectors. The main finding of the study was a positive correlation between the considered performance indicators and the companies' growth. The study is presented in the framework of learning at work and highlights the importance of using information systems in enterprise environments to develop professional competences and, thus, to achieve business excellence.

Paper Nr: 21
Title:

THE BEAUTY OF SIMPLICITY - Ubiquitous Microblogging in the Enterprise

Authors:

Peter Gluchowski and Martin Böhringer

Abstract: Microblogging has become a major trend in the World Wide Web. Given the history of blogs, wikis and social networking services, its adoption in commercial enterprises seems to be the next logical step. However, enterprise microblogging need not simply copy the public web’s functionality. In enterprise scenarios the microblogging mechanism can have many more use cases than its public counterparts, e.g., ‘tweeting’ processes, machines and software. We systematically develop a scenario for ubiquitous microblogging: a microblogging space including human and non-human information sources in an organisation. After presenting a conceptual description we discuss examples of the approach. Based on a comprehensive study of the existing literature, we finally present a detailed research agenda towards ubiquitous microblogging for enterprise information management.

Paper Nr: 64
Title:

ADAPTING MULTIPLE-CHOICE ITEM-WRITING GUIDELINES TO AN INDUSTRIAL CONTEXT

Authors:

Robert Michael Foster

Abstract: This paper proposes a guideline for writing MCQ items for our domain that promotes the use of Multiple Alternative Choice (MAC) items. This guideline is derived from one of the guidelines in the taxonomy of item-writing guidelines reviewed by Haladyna et al. (2002). The new guideline is tested by delivering two sets of MCQ test items to a representative sample of candidates from the domain: one set complies with the proposed guideline and the other does not. The relative effectiveness of the items in the experiment is evaluated using established methods of item response analysis. The experiment shows that the new guideline is more applicable to the featured domain than the original guideline.

Paper Nr: 133
Title:

IMPROVING MOODLE WITH WIRIS AND M-QIT

Authors:

Ángel Mora, Enrique Mérida and Domingo López

Abstract: Moodle is one of the most widespread LMSs. Moodle enables collaborative learning, where students and teachers collaborate on daily work. However, problems with Moodle arise in our scientific context. Although Moodle can render LaTeX code, not all of LaTeX's possibilities are available; it works well for scientific teachers but is hard for students, so mathematical formulas are usually replaced by non-standard versions written in ASCII. Another problem with Moodle is the difficulty of reusing quizzes. We present two tools that improve Moodle in three respects: the representation of mathematical formulas, mathematical computation, and the improvement of learning units for students. WIRIS is a powerful editor that allows users to interact with mathematical formulas in an easy way and to develop new learning units with mathematical computation via the web; M-QIT is a new tool able to manage and reuse quizzes and questions from previous Moodle courses.

Paper Nr: 158
Title:

USING FUZZY SET APPROACH IN MULTI-ATTRIBUTE AUTOMATED AUCTIONS

Authors:

Madhu Goyal and Saroj Kaushik

Abstract: This paper designs a novel fuzzy-attribute- and competition-based bidding strategy (FAC-Bid), in which the final best bid is calculated on the basis of an assessment of multiple attributes of the goods and of the competition for the goods in the market. The attribute assessment adapts fuzzy set techniques to handle the uncertainty of the bidding process. The bidding strategy also determines the competition in the market, based on two factors (the number of bidders participating and the total time elapsed in the auction), using Mamdani's Direct Method. The final price of the best bid is then determined from the assessed attributes and the competition in the market using fuzzy reasoning techniques.
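The two competition factors named in the abstract lend themselves to a Mamdani-style fuzzy rule. The following sketch is a generic illustration, with membership functions and thresholds invented for the example rather than taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical fuzzy sets for the two competition factors (bidder count and
# elapsed-time fraction); the breakpoints below are illustrative assumptions.
def many_bidders(n):
    return tri(n, 5, 20, 35)

def late_in_auction(t):  # t = fraction of auction time elapsed
    return tri(t, 0.5, 1.0, 1.5)

# Mamdani-style rule: IF many bidders AND late in auction THEN competition is high.
# The rule's firing strength is the minimum of the antecedent memberships.
def competition_high(n, t):
    return min(many_bidders(n), late_in_auction(t))

print(competition_high(20, 1.0))  # 1.0: both antecedents are fully satisfied
print(competition_high(12.5, 1.0))  # 0.5: partial membership in "many bidders"
```

In a full Mamdani inference cycle, the firing strengths of several such rules would clip their output sets, which are then aggregated and defuzzified to yield a crisp competition level.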

Paper Nr: 164
Title:

ABSTRACTION FROM COLLABORATION BETWEEN AGENTS USING ASYNCHRONOUS MESSAGE-PASSING

Authors:

Bent Bruun Kristensen

Abstract: Collaboration between agents using asynchronous message-passing is typically described in a centric form distributed among the agents. An alternative, associative form, also based on message-passing, is instead shared between the agents: this abstraction of the collaboration is a descriptive unit and makes the description of collaboration between agents simple and natural.

Paper Nr: 182
Title:

SERVICE ACQUISITION AND VALIDATION IN A DISTRIBUTED SERVICE DISCOVERY SYSTEM CONSISTING OF DOMAIN-SPECIFIC SUB-SYSTEMS

Authors:

Deniz Canturk and Pinar Senkul

Abstract: With the increase in the number of advertised web services, finding the web services appropriate for a given request leads to problems in terms of performance, efficiency, and quality of the discovered services. In this paper, we introduce a distributed web service discovery system that consists of domain-specific sub-systems. We establish ontology-based, domain-specific web service discovery sub-systems in order to discover web services on private sites and in business registries, and to improve service discovery in terms of efficiency and effectiveness. An important problem for effectiveness is checking whether a discovered service is still active, since many services in the registries are no longer active. Distributing discovery to domain-specific sub-systems makes it possible to lower the service discovery duration and to provide almost up-to-date service status. In this work, we describe the service acquisition and validation steps of the system in more detail and present experimental results on real services acquired on the web.

Paper Nr: 249
Title:

B2C AND C2C E-MARKETPLACES - A Multi-layer/Multi-agent Architecture to Support them

Authors:

R. Miguel, J. J. Castro-Schez, D. Vallejo, C. Glez-Morcillo and V. Herrera

Abstract: Due to the growth of the Internet and users' increasing tendency to purchase products online, e-commerce has become a very important new business opportunity. In this paper we present a multi-agent, multi-layer architecture to support general-purpose e-marketplaces. The presented system is composed of software agents that act on the users' behalf, reducing their search time and facilitating product acquisition. These agents are grouped in different layers supporting B2C and C2C e-marketplaces. The interaction between the different agents is explained throughout the paper, with emphasis on the agents used to search for, acquire, and recommend products. We have also used fuzzy logic in these agents because we believe it is very useful for facilitating their use and reducing search and acquisition time.

Paper Nr: 289
Title:

MOBILE E-LEARNING - Support Services Case Study

Authors:

Catarina Maximiano and Vítor Basto Fernandes

Abstract: Mobile devices and wireless communications are now present in the daily tasks of our lives. m-Learning extends the e-learning concept through the use of mobile computation and communication technologies. Mobile computing embodies the paradigm of "anytime, anywhere access", offering resources for distance education via mobile devices. This paradigm allows information to be made available to users with greater flexibility and diversity, supporting learning in non-conventional places and time schedules. The need for lifelong learning and for flexible education profiles requires the support and development of new approaches and tools in the educational context. This paper presents a distance learning case study at the Polytechnic Institute of Leiria. The main objective is the use of mobile devices as support tools for accessing course information and content resources available in Learning Management Systems (in the presented case study, Moodle).

Paper Nr: 324
Title:

APPLYING FIPA STANDARDS ONTOLOGICAL SUPPORT TO INTENTIONAL-MAS-ORIENTED UBIQUITOUS SYSTEM

Authors:

Milene Serrano and Carlos José Pereira de Lucena

Abstract: In this paper, we present the development of an Intentional-MAS-Oriented Ubiquitous System driven by the FIPA Standards Ontological Support. This support approaches the development with a certain degree of commonality. Our main goal is to improve intentional systematic software development for future ubiquitous systems by considering the same language, vocabulary, and protocols in the agents' communication and interoperability, as well as an adequate context-aware knowledge representation for different smart spaces.

Paper Nr: 413
Title:

SELECTING PARTNERS FOR COLLABORATIVE NETWORKS - Mixed Methods Approach

Authors:

Noor Azliza Che Mat, Yen Cheung and Helana Scheepers

Abstract: Due to an increasingly uncertain business environment, organisations need to collaborate in order to compete. However, given the limited time available for selecting partners, particularly new ones, there is a need to identify the critical factors in finding the right partners. To explore the criteria for partner selection, a mixed-methods research approach was employed: an online survey followed by a case study. The online survey was conducted with eighty-nine organisations that have experience in collaborative projects. ANOVA tests were performed on the survey data, followed by an exploratory analysis. The major findings showed that, of sixteen partner selection criteria, only seven were critically important in selecting partners. These were divided into two dimensions: dependability and experience. A case study was then carried out as a further analysis of the online survey findings.

Posters
Paper Nr: 42
Title:

EXTENDING LEGACY AGENT KNOWLEDGE BASE SYSTEMS WITH SEMANTIC WEB COMPATIBILITIES

Authors:

Po-Chun Chen, Guruprasad Airy, Prasenjit Mitra and John Yen

Abstract: Knowledge bases with inference capabilities play a significant role in an intelligent agent system. Towards the vision of the Semantic Web, compatibility of knowledge representation is critically important. However, a legacy system developed without this consideration has compatibility gaps between its own knowledge representation and the Semantic Web standards. To solve this problem, we present a systematic approach to extending a legacy agent knowledge base so that it can handle and reason over information encoded in standard Semantic Web languages. The algorithms presented in this paper are applicable to compatible rule-based systems, and the methodology can be applied to other knowledge systems.

Paper Nr: 61
Title:

L4F: A P2P ITS FOR RECOMMENDING ADDITIONAL LEARNING CONTENTS BY MEANS OF FOLKSONOMIES

Authors:

Marta Rey-López, Fernando A. Mikic-Fonte, Ana M. Peleteiro, Juan C. Burguillo and Ana Belén Barragáns-Martínez

Abstract: Intelligent Tutoring Systems (ITSs) aim at providing personalized and adaptive tutoring to students through the incorporation of a student modeling component. Besides, Web 2.0 technologies have achieved great acceptance, bringing new applications that empower multiuser collaboration, such as collaborative tagging systems. In this paper, we present L4F, a peer-to-peer system that provides students with additional learning elements related to the course they are following. This is achieved through the collaboration of several ITSs and the use of folksonomies.

Paper Nr: 163
Title:

ANTECEDENCE GRAPH APPROACH TO CHECKPOINTING FOR FAULT TOLERANCE IN MULTI AGENT SYSTEM

Authors:

Rajwinder Singh, Ramandeep Kaur and Rama Krishna Challa

Abstract: Checkpointing has been widely used for providing fault tolerance in multi-agent systems. However, traditional message-passing-based checkpointing and rollback algorithms may suffer from excess bandwidth consumption and large overheads. To maintain the consistency of a multi-agent system, checkpointing is forced on all participating agents, which may block the agents' operations while checkpointing is carried out. These overheads could be considerably reduced if checkpointing were forced only on selected agents instead of all of them. This paper presents a low-latency, non-blocking checkpointing scheme that marks out dependent agents using antecedence graphs and then forces checkpoints only on those agents. To recover from failures, the antecedence graphs and message logs are regenerated and normal operations continue. The proposed scheme incurs lower overheads and reduced recovery times compared to existing schemes.
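
As a rough illustration of the selective-checkpointing idea (not the authors' algorithm), the agents that must take a checkpoint can be marked by a reachability walk over an antecedence graph. The graph encoding below, a mapping from each agent to the agents it depends on, is an assumption for illustration.

```python
# Illustrative sketch: select only the agents transitively related to the
# checkpoint initiator in an antecedence graph, leaving unrelated agents
# free to continue without blocking.
from collections import deque

def agents_to_checkpoint(antecedence, initiator):
    """Return the set of agents reachable from the initiator via dependencies."""
    selected = {initiator}
    queue = deque([initiator])
    while queue:
        agent = queue.popleft()
        for dep in antecedence.get(agent, ()):
            if dep not in selected:
                selected.add(dep)
                queue.append(dep)
    return selected

# Example: A depends on B, B on D; C is unrelated, so only A, B, D checkpoint.
graph = {"A": ["B"], "B": ["D"], "C": [], "D": []}
```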

Paper Nr: 166
Title:

A FAULT DETECTION AND RECOVERY SYSTEM FOR DOORAE RUNNING ON HOME NETWORK ENVIRONMENT

Authors:

SoonGohn Kim and Eung Nam Ko

Abstract: We propose FDRS (Fault Detection and Recovery System) for DOORAE running in a home network environment. DOORAE (Distance Object Oriented collaboRAtion Environment) is a framework supporting the development of multimedia applications for computer-based collaborative work running on home networks. FDRS is a system suitable for detecting and recovering from software errors in a distributed multimedia education environment using software techniques. The purpose of FDRS is to recover application software running on DOORAE automatically and repeatedly, so that all application software running on DOORAE returns to a healthy state, or at least an acceptable one.

Paper Nr: 175
Title:

A CONTRACT-BASED EVENT DRIVEN MODEL FOR COLLABORATIVE SECURITY IN FINANCIAL INFORMATION SYSTEMS

Authors:

Roberto Baldoni, Georgio Lodi, Gregory Chockler, Eliezer Dekel, Barry P. Mulcahy and Giuseppe Martufi

Abstract: This paper introduces a new collaboration abstraction, called Semantic Room (SR), specifically targeted at facilitating the sharing and processing of large volumes of data produced and consumed in real time by a collection of networked participants. The model enables constructing flexible collaborative event-driven distributed systems with well-defined and contractually regulated properties and behavior. The contract determines the set of services provided by the SR and the software and hardware resources required for its operation, along with a collection of non-functional requirements such as data protection, isolation, trust, security, availability, fault tolerance, and performance. We show how the SR model can be leveraged to create trusted information processing systems that protect financial institutions against coordinated security threats (e.g., stealthy scans, worm outbreaks, Distributed Denial of Service). To this end, we present several use cases demonstrating a variety of SR administration task flows, and briefly discuss possible ways of implementing the SR abstraction, using collaborative intrusion detection as an example.

Paper Nr: 372
Title:

CASE STUDY: COMPLEMENTS OF MATHEMATICS AND E-LEARNING

Authors:

A. Hernández Encinas, A. Queiruga Dios, J. Martín Vaquero and J. L. Hernández Pastora

Abstract: In this study we present the experience of the work done during this course with students of the subject Complements of Mathematics. We used the computer in class every day and proposed to students daily activities and a final paper related to one of the topics of the subject. The use of an e-learning platform along with other tools is an improvement to the education system. This study was developed specifically with students from the last engineering courses and shows how mathematics could help engineering students change their traditional point of view of university studies and learning tools.

Paper Nr: 416
Title:

TheHiddenU - A Social Nexus for Privacy-assured Personalisation Brokerage

Authors:

G. Kappel, J. Schönböck, M. Wimmer, G. Kotsis, A. Kusel, B. Pröll, W. Retschitzegger, W. Schwinger, R. R. Wagner and S. Lechner

Abstract: Social networks have seen enormous growth over the past few years, also providing a powerful new channel for distributing personalized services. Personalization, however, is hampered because social content is scattered across different social networks serving specific human needs, and social networkers are particularly reluctant to share social content with service providers if it is not under their full control. This paper sketches TheHiddenU, a social nexus exploiting semantic techniques for integrating, profiling and privatising social content, thereby providing the technical prerequisites for personalized brokerage, a new, sustainable business model in the Social Web.

Paper Nr: 425
Title:

WHAT, WHY AND HOW TO INTEGRATE - Self Organization within a SOA

Authors:

Hakima Mellah, Salima Hassas and Habiba Drias

Abstract: The main point of this work is to show how to contribute to information system (IS) agility by enabling the self-organization of distributed ISs that represent the nodes of the information network relating them. The contribution focuses on presenting factors that lead to and trigger self-organization in a Service Oriented Architecture (SOA), and proposes how to integrate a Self-Organizing (SO) mechanism, which has already been proposed (in another work) for a Multi-Agent System (MAS).

Area 5 - Human-Computer Interaction

Full Papers
Paper Nr: 23
Title:

A MODEL FOR IMPROVING ENTERPRISE’S PERFORMANCE BASED ON COLLABORATIVE E-LEARNING

Authors:

Camelia Delcea, Maria Dascălu and Cristian Ciurea

Abstract: Collaborative learning and e-learning are believed to be very important to the success of enterprises. Some qualitative and quantitative variables that characterise collaborative learning through e-learning are depicted. The incidence degree between them and enterprise performance was determined through a model, and the difference between an enterprise's financial and non-financial performance was underlined. The proposed model is constructed using the facilities offered by grey systems and ϕ-fuzzy subset theory. To better understand the model, we applied it to two branches of the same bank. We conclude our paper by presenting and comparing the obtained results and by giving some guidelines for future work.

Paper Nr: 25
Title:

GEO-SPADE - A Generic Google Earth-based Framework for Analyzing and Exploring Spatio-temporal Data

Authors:

Slava Kisilevich, Daniel Keim and Lior Rokach

Abstract: Various frameworks are being developed to support geospatial data processing and exploration. However, these frameworks are insufficient due to their inability to support rapid development and the reuse of existing geospatial operations. To cope with the difficulties that arise, more and more Web-based solutions are being implemented, incorporating Web services and Open Geospatial Consortium (OGC) standards for data interchange and distributed geo-processing. Although these solutions often target specific problems, they can rarely be extended by a third party. As a result, analysts are frequently required to implement additional software in order to complete their exploration process. In this paper, we present a unified, extensible framework that can solve generic spatio-temporal analysis tasks. Specifically, we present the prototype of a new framework that allows quick hypothesis definition and testing. The proposed framework, which we term GEO-SPADE, uses Google Earth as the primary visualization platform and data interchange system. Pluggable components can easily be integrated into the framework. We demonstrate the feasibility of the GEO-SPADE framework by following a scenario of analyzing travel sequences near Funchal, Madeira Island.
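
Since GEO-SPADE uses Google Earth as its visualization and data interchange platform, and Google Earth ingests KML (itself an OGC standard), the interchange step can be pictured as emitting KML documents. The snippet below hand-builds a minimal placemark; the function name and the coordinates (roughly near Funchal, Madeira) are illustrative assumptions, not the paper's API.

```python
# Illustrative sketch: a minimal KML placemark of the kind a Google
# Earth-based tool could emit for a single point of interest.

def kml_placemark(name, lon, lat):
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
        f'  <Placemark><name>{name}</name>\n'
        f'    <Point><coordinates>{lon},{lat},0</coordinates></Point>\n'
        '  </Placemark>\n'
        '</kml>\n'
    )

doc = kml_placemark("Funchal marina", -16.91, 32.65)
```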

Paper Nr: 34
Title:

BIOSTORIES - Dynamic Multimedia Interfaces based on Automatic Real-time User Emotion Assessment

Authors:

Vasco Vinhas, Eugénio Oliveira and Luís Paulo Reis

Abstract: BioStories is the outcome of a three-and-a-half-year research project focused on uniting affective and ubiquitous computing with context-aware multimedia content generation and distribution. Its initial premise was the possibility of performing real-time automatic emotion assessment through online monitoring of biometric channels, and using this information to design on-the-fly, emotionally adapted dynamic multimedia storylines, so that end users would unconsciously be choosing the story flow. The emotion assessment process was based on dynamic fusion of biometric channels such as EEG, GSR, respiration rate and volume, skin temperature, and heart rate on top of Russell's circumplex model of affect. BioStories' broad scope also allowed for some spin-off projects, namely mouse control through EMG, which resulted in a patented technology for alternative/inclusive interfaces. Exhaustive experiments showed an 87% success rate for emotion assessment in a dynamic three-dimensional virtual environment, with an immersiveness score of 4.2 out of 5. The success of the proposed approach suggests its application in several domains such as virtual entertainment, videogames and cinema, as well as direct marketing, digital TV and domotic appliances.

Paper Nr: 117
Title:

A FRAMEWORK FOR FLEXIBILITY AT THE INTERFACE - Joining Ajax Technology and Semiotics

Authors:

Frederico José Fortuna, Rodrigo Bonacin and Maria Cecilia Calani Baranauskas

Abstract: Different users have different needs and, especially in the web context, the user interfaces should deal with these different needs. In this paper we propose a framework to support the design and development of flexible, adjustable interfaces; the framework is grounded in the semiotic concept of norms and is developed using Ajax technology. The framework was tested within the Vilanarede system, an inclusive social network system developed in the context of e-Cidadania project. The test helped us to identify the advantages and drawbacks of the proposed solution and the users provided us with important feedback to improve it.

Paper Nr: 132
Title:

EVALUATION OF AN ANTHROPOMORPHIC USER INTERFACE IN A TELEPHONE BIDDING CONTEXT AND AFFORDANCES

Authors:

Pietro Murano and Patrik O'Brian Holt

Abstract: Various researchers around the world have been investigating the use of anthropomorphism in some form at the user interface. Some key issues involve determining whether such a medium is effective and liked by users. Discovering conclusive evidence in relation to this medium would help user interface designers produce better user interfaces; currently, such conclusive evidence is lacking. The authors of this paper have been investigating these issues for some time, and this paper presents the results of an experiment that tested anthropomorphic voices against neutral voices in the context of telephone bidding. The results indicated that the impersonal human voice condition fostered higher bids, while user satisfaction was inconclusive across all four conditions tested. The results have also been examined in terms of the Theory of Affordances, with the aim of finding an explanation for them, and indicate that issues of affordances played a part in the results observed.

Paper Nr: 135
Title:

STANDARDS FOR COMMUNICATION AND E-LEARNING IN VIRTUAL WORLDS - The Multilingual-assisted Chat Interface

Authors:

Samuel Cruz-Lara, Tarik Osswald, Jordan Guinaud, Nadia Bellalem and Lotfi Bellalem

Abstract: Many of today’s applications embed textual chat interfaces or work with multilingual textual information. The Multilingual Information Framework (MLIF) [ISO DIS 24616] is being designed in order to fulfill the multilingual needs of today’s applications. Within our research activity for the MLIF standard, we developed the Multilingual-Assisted Chat Interface, which intends to help people communicate in virtual worlds with others who do not speak the same language and to offer new possibilities for learning foreign languages. By developing this application, we also wanted to show the advantages of using web services for externalizing computation: we used the same web service for two virtual worlds: Second Life and Solipsis. In this paper, we first propose a short analysis of social interactions and language learning in virtual worlds. Then, we describe in a technical way the features, architecture and development indications for the Multilingual-Assisted Chat Interface.

Paper Nr: 152
Title:

NEW PERSPECTIVES FOR SEARCH IN SOCIAL NETWORKS - A Challenge for Inclusion

Authors:

Júlio Cesar dos Reis, Rodrigo Bonacin and M. Cecilia C. Baranauskas

Abstract: The world is populated with many scenarios characterized by a diversity of cultures and social problems, so it is necessary to investigate computational solutions that respect this diversity. The use of search engines is one of the main mechanisms for providing access to the information generated in Social Network Services (SNS). These mechanisms are currently built through lexical-syntactical processing, resulting in barriers for many users to access correct and valuable information on the Web. Novel search mechanisms could effectively help people to recover and use information through Inclusive Social Network Services (ISN), promoting universal access to information. This paper shows results of search activities in an ISN that point out how to improve search engines considering aspects related to social and digital inclusion. Inspired by these results, we outline an approach based on Organisational Semiotics to build a Web ontology, which is used by an inclusive search engine drawn up in this paper. This proposal combines different strategies to provide better search results for all.

Paper Nr: 251
Title:

A STUDY IN AUTHENTICATION VIA ELECTRONIC PERSONAL HISTORY QUESTIONS

Authors:

Ann Nosseir and Sotirios Terzis

Abstract: Authentication via electronic personal history questions is a novel technique that aims to enhance question-based authentication. This paper presents a study that is part of a wider investigation into the feasibility of the technique. The study used academic personal web site data as a source of personal history information, and studied the effect of using an image-based representation of questions about personal history events. It followed a methodology that assessed the impact on both genuine users and attackers, and provides a deeper insight into their behaviour. From an authentication point of view, the study concluded that (a) an image-based representation of questions is certainly beneficial; (b) a small increase in the number of distracters/options used in closed questions has a positive effect; and (c) despite the attackers' closeness to the genuine users, their ability to answer questions about the genuine users' personal history correctly and with high confidence is limited. These results are encouraging for the feasibility of the technique.

Paper Nr: 318
Title:

CONTEXT-AWARE SEARCH ARCHITECTURE

Authors:

Hadas Weinberger, Oleg Guzikov and Keren Raby

Abstract: There are several reasons for developing a context-aware search interface. So far, search engines have considered the technology perspective, suggesting structural, statistical, syntactical and semantic measures. What is still missing in Web search processes is the inclusion of the user model; the prevailing situation is a usability hurdle. While there is a wealth of information about search engines, what is lacking is a recommender system: a set of adequate principles and techniques serving as the basis for the design of a Web-based interface that guides users towards efficient and effective utilization of the spectrum of search engines available on the Web. The research reported here takes a step towards this goal, suggesting a context-aware search architecture (CASA) aimed at: 1) the analysis of query elements, 2) guiding the process of query modification, and 3) recommending the personalized use of search engines. A use case illustrates the need for the suggested framework, and a prototype Web interface is introduced. We discuss preliminary findings, concerning the feasibility and usefulness of the suggested framework, from empirical research conducted with several classes of students in two distinct academic institutes in two different countries. We conclude with recommendations for further research.

Paper Nr: 360
Title:

A USER-INTERFACE ENVIRONMENT AS A SUPPORT IN MATHS TEACHING FOR DEAF CHILDREN

Authors:

Maici Duarte Leite, Laura Sánchez García, Andrey R. Pimentel, Marcos S. Sunye, Marcos A. Castilho, Luis C. Bona and Fabiano Silva

Abstract: The use of theories stemming from different areas of knowledge has contributed both to knowledge acquisition – since these theories help perceive individuals more effectively – and to the development of educational software. This is the case, for instance, of cognitive theories, which assist in terms of both learning support and the clarification of possible needs in the acquisition of a given concept. In the present paper, we propose a discussion about the help function (Leite, Borba and Gomes, 2008) provided by educational software, focusing on deaf students. This paper also presents an enhancement of the interface design of the help function, based on the Conceptual Fields Theory and the Theory of Multiple External Representations. This enhancement is created by carrying out a new analysis regarding the display of the help function through diagrams. In short, we derive important contributions from these two theoretical frameworks so as to provide deaf students, placed in an inclusive context, with support in the acquisition of mathematical concepts.

Paper Nr: 373
Title:

VIRTUAL AND COLLABORATIVE ENVIRONMENT FOR LEARNING MATHS

Authors:

A. Queiruga Dios, A. Hernández Encinas, I. Visus Ruiz and A. Martín del Rey

Abstract: This paper presents the experience of interaction between students and the new teaching-learning system, using e-learning platforms to support and improve classroom teaching. The proposed system tries to achieve a new common space among several European countries that will allow adaptation to a comparable and compatible common system of higher education. This study was developed specifically with students from the first engineering courses and shows an online platform that helps them interact with classmates and acquire knowledge using the computer.

Paper Nr: 388
Title:

A TOOL FOR AUTOMATIC ADAPTATION OF WEB PAGES TO DIFFERENT SCREEN SIZE

Authors:

Roberto Armenise, Cosimo Birtolo and Luigi Troiano

Abstract: Digital content is often designed with desktop applications as the target in mind. As the Mobile Web is becoming a common means to access Internet services, there is a need to adapt content to the smaller displays of mobile devices. Adaptation of web pages should be in accordance with aesthetics and usability requirements. In this paper we propose a tool, based on a genetic algorithm, able to assist designers in delivering content adapted to mobile devices. In particular, starting from a given page, the tool searches for alternative layouts in order to best fit the content to the reduced target screen size. The result is a set of adapted web pages. Experimental results show this approach is feasible and can compete with designs made by humans.
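
The abstract does not give the authors' genetic algorithm, but the idea of evolving layout alternatives for a narrow screen can be sketched as a toy, mutation-only evolutionary loop. The block sizes, screen width, fitness (total packed page height), and parameters below are all invented for illustration.

```python
import random

# Toy sketch (not the paper's algorithm): evolve an ordering of page
# blocks (width, height) so they pack into a narrow screen with minimal
# total page height under greedy row packing.

SCREEN_W = 320
BLOCKS = [(300, 40), (150, 60), (150, 60), (100, 30), (100, 30), (100, 30)]

def page_height(order):
    """Greedy packing: place blocks left to right, wrap at SCREEN_W."""
    height, row_w, row_h = 0, 0, 0
    for i in order:
        w, h = BLOCKS[i]
        if row_w + w > SCREEN_W:   # start a new row
            height += row_h
            row_w, row_h = 0, 0
        row_w += w
        row_h = max(row_h, h)
    return height + row_h

def mutate(order):
    """Swap two block positions."""
    a, b = random.sample(range(len(order)), 2)
    order = list(order)
    order[a], order[b] = order[b], order[a]
    return order

def evolve(generations=200, pop_size=20, seed=0):
    random.seed(seed)
    n = len(BLOCKS)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=page_height)
        survivors = pop[: pop_size // 2]   # elitist truncation selection
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=page_height)

best = evolve()
```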

Paper Nr: 396
Title:

GlobeOLAP - Improving the Geospatial Realism in Multidimensional Analysis Environment

Authors:

Vinícius Ramos Toledo Ferraz and Marilde Terezinha Prado Santos

Abstract: The combination of Geographic Data Warehouses (GDW) and Spatial OLAP (SOLAP) provides efficient browsing and storage of very large spatio-temporal databases. However, some advanced geovisualization techniques are not well supported in many of these tools. Three-dimensional thematic mapping through virtual globes is one of them, and it can provide a friendly but powerful mechanism for visually summarizing huge amounts of geospatial-analytical data and their changes over time. This paper presents GlobeOLAP, a prototype of a Web-based SOLAP tool that allows the customized generation of many types of thematic maps to be visualized three-dimensionally in a virtual globe (Google Earth), together with the traditional tabular view. This tool can improve management decisions by allowing managers to identify patterns and build knowledge in a realistic spatial environment.

Short Papers
Paper Nr: 8
Title:

USABILITY EVALUATION FRAMEWORK FOR E-COMMERCE WEBSITES

Authors:

Layla Hasan, Anne Morris and Steve Probets

Abstract: The importance of evaluating the usability of e-commerce websites is well recognised and several studies have evaluated the usability of e-commerce websites using either user- or evaluator-based usability evaluation methods. No research, however, has employed a software-based method in the evaluation of such sites. Furthermore, the studies which employed user testing and/or heuristic evaluation methods in the evaluation of the usability of e-commerce websites did not offer detail about the benefits and drawbacks of these methods with respect to the identification of specific types of usability problem. This research has developed a methodological framework for the usability evaluation of e-commerce websites which involves employing user testing and heuristic evaluation methods together with Google Analytics software. The framework was developed by comparing the benefits and drawbacks of these methods in terms of the specific areas of usability problems that they could or could not identify on e-commerce websites.

Paper Nr: 10
Title:

CONSUMER PRIVACY BEING RAIDED AND INVADED - The Negative Side of Mobile Advertising

Authors:

Monika Mital

Abstract: There have been growing concerns that mobile advertising is extremely intrusive into the personal space of consumers. The study tries to broadly concretize the reasons why mobile ads are found to be intrusive. The analysis reported that three factors, namely situation characteristics, message characteristics and device/network characteristics, played an important role in defining the extent of the intrusiveness of mobile advertising and the ad irritation arising out of it.

Paper Nr: 24
Title:

THE ACCEPTANCE OF WIRELESS HEALTHCARE FOR INDIVIDUALS - An Integrative View

Authors:

Ing-Long Wu, Jhao-Yin Li, Chu-Ying Fu and Shwu-Ming Wu

Abstract: A recent report showed that the adoption rate of mobile healthcare is relatively low. Thus, a study of how healthcare professionals adopt mobile services to support their work is imperative in practice. An integration of TAM and TPB considers both technological and organizational aspects in a complementary manner. However, mobile healthcare is an emerging technology with wireless features that is often used voluntarily, so service provision for pervasive usage and the individual's psychological state are critical in determining system use. Accordingly, perceived service availability and personal innovativeness in IT are the major drivers for the components of TAM and TPB. This study thus proposes a research framework integrating these relevant components from a broader perspective. Empirical research is further conducted to examine its practical validity.

Paper Nr: 66
Title:

ON THE IMPORTANCE OF VISUALIZING IN PROGRAMMING EDUCATION

Authors:

Peter Bellström and Claes Thorén

Abstract: In this paper we address the importance of visualizing in programming education. In doing so, we describe three contributions to the research field. First, we describe an initial study on visualizing the Bubble Sort algorithm, chosen because it contains several parts that have in the past been troublesome for students taking introductory programming courses. Secondly, we describe a design for how visualization can be inserted into programming education, again using the Bubble Sort algorithm as an illustrating example. Thirdly, we present a classification of four visual programming environments — Alice, BlueJ, Greenfoot and Scratch — positioning each in a matrix comprised of a granularity dimension and a visualization dimension. All three contributions should further an understanding of abstract programming concepts that starts with the problem or application instead of the syntax. Students lacking scientific mathematics and students taking an introductory programming course based on e-learning should benefit the most from the presented contributions.
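
As a minimal, text-only illustration of the stepwise Bubble Sort behaviour the paper argues students struggle to infer from code alone (this is not the authors' visualization tool), one can snapshot the array after each pass:

```python
# Illustrative sketch: Bubble Sort instrumented to record the array after
# every pass, giving a simple trace that could back a visualization.

def bubble_sort_trace(values):
    data = list(values)
    passes = []
    for i in range(len(data) - 1):
        swapped = False
        for j in range(len(data) - 1 - i):
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
                swapped = True
        passes.append(list(data))   # snapshot after each pass
        if not swapped:             # early exit: already sorted
            break
    return data, passes

sorted_data, trace = bubble_sort_trace([5, 1, 4, 2, 8])
```

Printing each entry of `trace` shows, pass by pass, how the largest remaining element "bubbles" to the right end of the array.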

Paper Nr: 114
Title:

ENCOURAGING A CULTURE CHANGE IN TASK MANAGEMENT WITHIN PIM TOOLS

Authors:

Roger Tagg, Leonard Koh and Tamara Beames

Abstract: Personal Information Management (PIM) software tools are becoming increasingly important in easing the high information workloads of today's knowledge workers. However, despite the efforts of both commercial vendors and research teams, significant improvements have yet to find their way into mainstream commercial tools and common usage. This paper focuses on two particular aspects: improved support for task management and better user interface metaphors. We also address how current knowledge work culture affects the way in which PIM tools are utilised, and how it affects the adoption of these tools.

Paper Nr: 131
Title:

EFFECTIVENESS AND PREFERENCES OF ANTHROPOMORPHIC FEEDBACK IN A STATISTICS CONTEXT

Authors:

Pietro Murano and Nooralisa Mohd Tuah

Abstract: This paper describes an experiment and its results concerning research that has been going on for a number of years in the area of anthropomorphic user interface feedback. The main aims of the research have been to examine the effectiveness and user satisfaction of anthropomorphic feedback. The results are useful to all user interface designers wishing to improve the usability of their products. There is still disagreement in the research community concerning the usefulness of anthropomorphism at the user interface. This research contributes knowledge to the aim of discovering whether such approaches are effective and liked by users. The experiment described in this paper concerns the context of statistics tutoring/revision, which is part of the domain of software for in-depth learning. Anthropomorphic feedback was compared against equivalent non-anthropomorphic feedback. The results indicated that the anthropomorphic feedback was preferred by users on some factors. However, the evidence for effectiveness was inconclusive.

Paper Nr: 189
Title:

A FRAMEWORK-INFORMED DISCUSSION ON SOCIAL SOFTWARE - Why do Some Social Software Fail and Others do Not?

Authors:

Roberto Pereira, M. Cecilia C. Baranauskas and Sergio Roberto P. da Silva

Abstract: The possibility of developing more interactive and innovative applications has led to an explosion in the number of systems available on the web in which users interact with each other and have a primary role as producers of content, the so-called social software. Despite the popularity of such systems, few of them sustain effective participation of their users and promote a continuous and productive interaction. This paper aims at starting a discussion about the factors that contribute to the success of certain systems in keeping their users' attention while others fail. To achieve this goal, we present a discussion informed by a conceptual framework. To situate the discussion in a practical context, we illustrate it with an analysis of a collaborative system for usability evaluation on the web.

Paper Nr: 190
Title:

USING TASK AND DATA MODELS FOR USER INTERFACE DECLARATIVE GENERATION

Authors:

Vi Tran, Manuel Kolp, Jean Vanderdonckt and Yves Wautelet

Abstract: User interfaces for data systems have been a technical and human-interaction research question for a long time, and today these user interfaces require dynamic automation and run-time generation to be dealt with properly on a large scale. This paper proposes a framework, i.e., a methodological process, a meta-model and a software tool, to drive the automatic design of database user interfaces and the generation of their code-behind from the task model and the data model combined. This covers both the user interface and sound and complete data definition, update and manipulation.
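The core idea of declarative generation can be pictured with a heavily simplified sketch: derive form widgets from a data model. The type-to-widget mapping below is an illustrative assumption, not the authors' meta-model:

```python
# Map declared field types to UI widget kinds (illustrative assumption).
WIDGET_FOR_TYPE = {
    "string": "text_field",
    "int": "spin_box",
    "bool": "check_box",
    "date": "date_picker",
    "enum": "combo_box",
}

def generate_form(data_model):
    """Turn a {field: type} data model into an ordered list of (label, widget) pairs."""
    form = []
    for field, ftype in data_model.items():
        widget = WIDGET_FOR_TYPE.get(ftype, "text_field")  # fallback widget
        form.append((field.replace("_", " ").title(), widget))
    return form

customer_form = generate_form({"name": "string", "age": "int", "vip": "bool"})
```

A real system would additionally consult the task model to decide which fields appear on which screen and in what interaction order.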

Paper Nr: 237
Title:

NEW ERA OF M-LEARNING TOOLS - Creation of MPrinceTool a Mobile Educative Tool

Authors:

Habib M. Fardoun, Pedro G. Villanueva, Juan Enrique Garrido, Gabriel Sebastian Rivera and Elena de la Guía

Abstract: M-learning is the exciting art of using mobile technologies to enhance learning. Mobile phones, PDAs, Pocket PCs and the Internet can be joined together to engage and motivate learners anytime, anywhere. Society is entering a new era of m-learning, which makes it important to analyze and innovate on current educational tools. This article proposes a new tool that aims to remedy the deficiencies identified in such analysis. The proposed tool, called MPrinceTool, provides a new means of interacting via mobile technology, using web services that let users participate in educational activities and communicate with working groups, either synchronously, where students can participate in class, communicate with the teacher and the other students, and take exams and exercises that require the teacher's presence, or asynchronously, which allows students to carry out their educational activities freely outside school time. These advantages are embodied in our tool, and part of them is explained in the "use case" proposed in the paper. We also briefly explain the methodology and technology used in developing the tool by presenting its system architecture, deployment and class models.

Paper Nr: 244
Title:

THE VISUALISATION AND EXPLORATION OF A CURRICULUM KNOWLEDGEBASE

Authors:

Sebastian Richards and Hilary Dexter

Abstract: This paper discusses the difficulties associated with the visualisation and navigation of complex, multi-faceted knowledgebases. The issues are analysed through a case study of the Manchester Medical School curriculum, with the intention of developing an innovative visualisation tool to enhance the experience of navigation through such large sets of data. The approach taken to provide a solution involves the use of a metaphorical representation of the data schema, designed to reduce visual complexity and increase human usability. User feedback uncovered the need to create a ‘trail’ of related concepts through the knowledgebase, and provision for such navigation techniques is present in the developed tool. The concept of domain-specific structural customisation was investigated and led to the suggestion that modification of a schema towards a more logical and human-friendly abstraction can improve a user’s understanding and engagement with data.

Paper Nr: 292
Title:

AGILE DEVELOPMENT OF INTERACTIVE INSTALLATIONS - Two Case Studies

Authors:

Pedro Campos

Abstract: We argue that agile methods can be particularly effective when designing and developing interactive installations, as long as the agile methods are correctly tailored to this application domain. Based on significant experience, which was built upon ethnographic observation and participation in about a dozen industrial projects related to interactive installations’ design and development, we present agile strategies which proved effective when dealing with the industry’s typical tight production schedules, and we also provide the data from two case studies, discussions and conclusions. Using real world case studies such as these, researchers can obtain more insight into best practices that could be useful for promoting innovation during the agile process.

Paper Nr: 321
Title:

A GRAPHICAL WORKBENCH FOR KNOWLEDGE WORKERS

Authors:

Heiko Haller, Andreas Abecker and Max Völkel

Abstract: We present iMapping, a novel approach for visually structuring information objects on the desktop. iMapping is developed on top of semantic desktop technologies and particularly supports the personal knowledge management of knowledge workers. iMapping has been designed to combine the advantages of the three most important visual mapping approaches: Mind-Mapping, Concept Maps and Spatial Hypertext. We describe the design and prototypical implementation of iMapping, which is fundamentally based on deep zooming and nesting. iMapping bridges the gap between unstructured content (like informal text notes) and semantic models by allowing users to easily create on-the-fly annotations with the whole range of content links, from vague associations to formal relationships. Our first experimental evaluations indicate a favorable user experience and functionality compared with state-of-the-art commercial Mind-Mapping software.

Paper Nr: 354
Title:

CULTURE INFLUENCE ON HUMAN COMPUTER INTERACTION - Cultural Factors Toward User’s Preference on Groupware Application Design

Authors:

Rein Suadamara, Stefan Werner and Axel Hunger

Abstract: This paper reports on ongoing research into how cultural dimensions affect users' preferences in intercultural collaboration using computer-supported cooperative work (CSCW) tools. It proposes how selected cultural dimensions should be applied when designing a synchronous groupware application aimed at multicultural users. Using four cultural dimensions drawn from Hofstede, Gudykunst, Triandis, and Edward T. Hall, namely Collectivism-Individualism, the Power Distance Index, Uncertainty Avoidance, and Low- and High-Context communication, this research analyses how culture influences the way users prefer to interact using groupware as a remote collaboration tool.

Paper Nr: 382
Title:

USING CULTURAL DIFFERENCES TO JOIN PEOPLE WITH COMMON INTERESTS OR PROBLEMS IN ENTERPRISE ENVIRONMENT

Authors:

Gilberto Astolfi, Vanessa M. A. de Magalhães, Marcos A. R. Silva and Junia C. Anacleto

Abstract: This paper describes a methodology to identify people who share common subjects but have different cultures. Each person has a culture, and it influences the way that person expresses himself in writing. Because of that, on the Web today it is very difficult for a person who writes ’validating enterprise system’ to find another who writes ’attesting B2B solution’, because the words are different. There is nevertheless a common subject, which search engines cannot identify because doing so requires taking into consideration the cultural knowledge and experience of each person. In this context, the methodology presented here intends to use this cultural information to identify when two or more people are talking about the same subject.

Paper Nr: 431
Title:

AUTOMATIC DIALOG ACT CORPUS CREATION FROM WEB PAGES

Authors:

Pavel Král and Christophe Cerisara

Abstract: This work presents two complementary tools dedicated to the task of textual corpus creation for linguistic research. The chosen application domain is automatic dialog act recognition, but the proposed tools might also be applied to any other research area concerned with dialog processing. The first tool captures relevant dialogs from freely available resources on the World Wide Web. Filtering and parsing of these web pages is realized with a set of hand-crafted rules. A second set of rules is then applied to achieve automatic segmentation and dialog act tagging. The second tool is finally used as a post-processing step to manually check and correct tagging errors when needed. In this paper, both tools are presented, and the performance of automatic tagging is evaluated on a dialog corpus extracted from an online Czech journal. We show that reasonably good dialog act labeling accuracy can be achieved, greatly reducing the cost of building such corpora.
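The rule-based tagging step can be pictured with a toy sketch like the following; the rules and the tag set here are illustrative assumptions, not the authors' hand-crafted rule set:

```python
import re

# Toy dialog-act rules, applied in order; the first matching rule wins.
RULES = [
    (re.compile(r"\?\s*$"), "QUESTION"),
    (re.compile(r"^(yes|no|yeah|nope)\b", re.IGNORECASE), "ANSWER"),
    (re.compile(r"^(hi|hello|good (morning|evening))\b", re.IGNORECASE), "GREETING"),
]

def tag_utterance(utterance):
    """Assign a dialog-act tag to one utterance using hand-crafted rules."""
    for pattern, tag in RULES:
        if pattern.search(utterance):
            return tag
    return "STATEMENT"  # default act when no rule fires
```

A manual post-processing pass, as the abstract describes, would then review and correct the tags this automatic step produces.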

Posters
Paper Nr: 39
Title:

READABILITY METRICS FOR WEB APPLICATIONS ACCESSIBILITY

Authors:

Miriam Martínez, José R. Hilera and Luis Fernández-Sanz

Abstract: In this work, we present an analysis of the applicability of specific metrics to the evaluation of the understandability of web content expressed as text, one of the key characteristics of accessibility according to the WAI. Results of applying the metrics to check the level of understandability of English-language pages of different universities are discussed.
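As an example of the kind of metric such an analysis can apply (the abstract does not name its metrics, so the Flesch Reading Ease formula below is just one plausible choice, with a rough syllable heuristic):

```python
import re

def count_syllables(word):
    """Rough syllable estimate: count vowel groups, discounting a trailing silent 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    count = len(groups)
    if count > 1 and word.lower().endswith("e"):
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores mean easier text (roughly 0-100)."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    n = max(len(words), 1)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

score = flesch_reading_ease("The cat sat on the mat.")
```

Running such a metric over each crawled university page yields a comparable understandability score per page.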

Paper Nr: 45
Title:

DESIGN METHOD ANALYSIS OF WEB INFORMATION SYSTEM FOR PEOPLE WITH DISABILITIES

Authors:

Gatis Vitols

Abstract: This paper addresses the improvement of human-system interaction, more specifically, improving the accessibility of Web information systems for people with disabilities. Despite the fast development of the Web, system usability remains a live issue, and there is a need to improve the usability and accessibility of information systems. One open question is how to make Web resources more usable for people with disabilities. The importance of this topic is beyond question: beyond the widespread interpretation of what counts as a disability, it is sometimes forgotten that even color blindness, or the impairments that come with aging, are forms of disability. This paper identifies groups of people with disabilities who use Web information systems and the problems that they meet. By analyzing these problems, the main categories of needs are brought forward and analyzed, and basic solutions for Web developers are discussed.

Paper Nr: 109
Title:

INFORMATION SECURITY AWARENESS IN DIFFERENT HIGHER EDUCATION CONTEXTS - A Comparative Study

Authors:

Adam Marks and Yacine Rezgui

Abstract: Higher education in the UAE has advanced significantly in the last decade. Several higher education institutions have sought or been granted international recognition and accreditation. Yet the status of IT in UAE higher education has received very limited attention. This study explores and compares the level of IS security awareness of IS users in one of the leading UAE higher education institutions with that of IS users in a classical UK university. In addition, the study examines possible factors behind any variations in the levels of IS security awareness.

Paper Nr: 228
Title:

PORTUGUESE WEB ACCESSIBILITY SNAPSHOT - Status of the Portuguese Websites Regarding Accessibility Levels

Authors:

Ramiro Gonçalves, José Martins, Manuel Martins, Jorge Pereira and Henrique Mamede

Abstract: The Internet is extremely important for publishing information and for interaction between members of society. It is therefore essential that the web be accessible to all, including those with any kind of disability. An accessible web may help citizens with disabilities interact with society in a more active way. This paper presents a study of a universe of the 777 biggest Portuguese enterprises, carried out using a W3C-referenced tool (SortSite), and presents statistical results according to the Portuguese Classification of Enterprise Activities. The results reveal a seriously troubled reality that prevents disabled citizens from having the same access rights to the World Wide Web as “non-disabled” citizens.

Paper Nr: 236
Title:

MANAGEMENT INFORMATION SYSTEMS IN HIGHER EDUCATION - Key Factors of User Acceptance

Authors:

Elisabeth Milchrahm

Abstract: The pan-European management of higher education has resulted in management information systems being developed by universities to administer courses and examinations more effectively and more efficiently. Management information systems in universities have to meet particular requirements: they not only have to ensure that large volumes of data are managed smoothly; they also have to take account of complex decision-making structures. The object of the present study is the most widely distributed university management information system in Austria. The aim is to analyse students' user acceptance based on the following identified key factors: usefulness, ease of use, trust, registration/cancellation methods and mandatory use. Drawing on statistical data from more than 1,100 questionnaires, the survey focuses on the critical success factors and provides recommendations for measures to encourage acceptance of management information systems.

Paper Nr: 299
Title:

AN ONLINE GAME FOR TEACHING BUSINESS MANAGEMENT CONCEPTS

Authors:

Pedro Campos

Abstract: In this paper, we describe the result of a multi-disciplinary approach to designing a particular class of educational games: business management games. The approach was based on intensive collaboration and co-design meetings with business management researchers. The result was a Web-based game called “SimCompany”, aimed at teaching children about business management concepts, thus promoting an entrepreneurship culture in classroom settings and beyond. “SimCompany” proved effective as a teaching tool about business management concepts, and initial evaluation showed a positive increase in students’ rate of learning, compared to traditional teaching methods.

Paper Nr: 378
Title:

EMD OVERSHOOT EFFECT IN ERP DETECTION - ERP Detection related Specifics of the Empirical Mode Decomposition in EEG Analysis

Authors:

Jindrich Ciniburk

Abstract: Event-related potentials (ERPs) are detected from continuous EEG. The most common method for ERP detection is averaging, but this method is not suitable for single-trial detection because it requires many epochs. When performing attention experiments, it is necessary to detect ERPs ideally from a single epoch. To detect an ERP means to determine its amplitude and latency. The EEG signal is quasi-stationary, so it is necessary to use signal processing methods designed for this task. We decided to use the Hilbert-Huang transform. Its capabilities and its problems for ERP detection are discussed in the paper.
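Why averaging works, and why it needs many epochs, can be sketched with synthetic data (an illustrative assumption, not EEG): the ERP waveform repeats in every epoch while the noise cancels out roughly as 1/sqrt(N).

```python
import math
import random

random.seed(0)

def erp(t):
    """Toy ERP waveform: a single positive peak around t = 0.3 s."""
    return math.exp(-((t - 0.3) ** 2) / 0.005)

times = [i / 100.0 for i in range(100)]          # one 1 s epoch at 100 Hz
true_wave = [erp(t) for t in times]

def noisy_epoch():
    """One simulated trial: the ERP buried in unit-variance Gaussian noise."""
    return [v + random.gauss(0.0, 1.0) for v in true_wave]

epochs = [noisy_epoch() for _ in range(200)]
average = [sum(e[i] for e in epochs) / len(epochs) for i in range(len(times))]

def rms_error(wave):
    return math.sqrt(sum((w - v) ** 2 for w, v in zip(wave, true_wave)) / len(wave))

err_single = rms_error(epochs[0])   # noise level of one trial, about 1.0
err_avg = rms_error(average)        # roughly err_single / sqrt(200)
```

The 200-fold averaging makes the peak's amplitude and latency readable, which is exactly what a single-trial method must achieve without the luxury of repetition.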

Paper Nr: 399
Title:

PERIPHERAL VISION PATTERN DETECTION DYNAMIC TEST

Authors:

João P. Rodrigues, João D. Semedo, Fernando M. Melicio and Agostinho C. da Rosa

Abstract: This work proposes a test that evaluates how well a subject can recognize and relate objects in the peripheral and foveal fields while focused on a different task, and how well this subject can make decisions based on this visual information. Although a few peripheral vision tests exist in ophthalmology for checking the homogeneity and the reach of the visual field, these professional or clinical grade tests need a fixing or resting system to immobilize the head, and the subject must be instructed to gaze at a reference point. Our test evaluates not only the homogeneity of the visual field but also how well the visually acquired information is processed. Automatic detection of ocular movement is used to separate the results due to peripheral vision from those due to central vision. The test was applied to twelve junior soccer players and successfully identified those that used more peripheral vision or eye scanning, as well as those that did not want to cooperate and clicked randomly.

Paper Nr: 446
Title:

SUPPORTING MENU LAYOUT DESIGN BY GENETIC PROGRAMMING

Authors:

Cosimo Birtolo, Roberto Armenise and Luigi Troiano

Abstract: Graphical User Interfaces heavily rely on menus to access application functionalities. Therefore, designing menus properly poses relevant usability issues. Indeed, trading off between semantic preferences and usability makes this task far from easy. Meta-heuristics represent a promising approach to assisting designers in specifying menu layouts. In this paper, we report a preliminary experience in adopting Genetic Programming as a natural means of evolving a menu hierarchy towards an optimal structure.
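Genetic Programming evolves tree-structured programs; the sketch below strips the idea down to a (1+1) evolutionary search over a flat item ordering, just to show the fitness-driven loop. The item names, access frequencies, and cost model are made-up assumptions, not the authors' encoding:

```python
import random

random.seed(1)

# Hypothetical access frequencies for menu items (illustrative assumption).
FREQ = {"Open": 40, "Save": 30, "Print": 15, "Export": 10, "About": 5}
ITEMS = list(FREQ)

def cost(order):
    """Expected selection cost: frequent items should sit near the top."""
    return sum(FREQ[item] * (pos + 1) for pos, item in enumerate(order))

def mutate(order):
    """Swap two random positions to produce a child layout."""
    child = list(order)
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

# (1+1) evolution with elitism: the best layout found never gets worse.
best = list(ITEMS)
random.shuffle(best)
initial_cost = cost(best)
for _ in range(200):
    child = mutate(best)
    if cost(child) < cost(best):
        best = child

final_cost = cost(best)
```

A full GP treatment would instead evolve the menu *hierarchy* (submenu nesting and grouping) and fold the paper's semantic preferences into the fitness function alongside selection cost.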