ICEIS 2010 Abstracts


Area 1 - Databases and Information Systems Integration

Full Papers
Paper Nr: 16
Title:

Multi-Process Optimization via Horizontal Message Queue Partitioning

Authors:

Matthias Boehm, Wolfgang Lehner and Dirk Habich

Abstract: Message-oriented integration platforms execute integration processes---in the sense of workflow-based process specifications of integration tasks---in order to exchange data between heterogeneous systems and applications. The overall optimization objective is throughput maximization, i.e., maximizing the number of processed messages per time period. Here, moderate latency for single messages is acceptable. The efficiency of the central integration platform is crucial for enterprise data management because both the data consistency between operational systems and the up-to-dateness of analytical query results depend on it. With the aim of maximizing integration process throughput, we propose the concept of multi-process optimization (MPO). In this approach, messages are collected during a waiting period and executed in batches to optimize sequences of process instances of a single process plan. We introduce a horizontal---and thus, value-based---partitioning approach for message batch creation and show how to compute the optimal waiting time with regard to throughput maximization. This approach significantly reduces the total processing time of a message sequence and hence maximizes throughput while accepting moderate latency.
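The batching idea from this abstract can be illustrated with a small sketch (the attribute names and the cost model below are illustrative assumptions, not taken from the paper): messages collected during the waiting period are partitioned horizontally by the value of one attribute, and each partition is executed as a single batch process instance, amortizing per-instance overhead.

```python
from collections import defaultdict

def partition_messages(messages, key_attr):
    """Horizontal (value-based) partitioning: group messages by the
    value of one attribute so each group can run as one batch."""
    partitions = defaultdict(list)
    for msg in messages:
        partitions[msg[key_attr]].append(msg)
    return dict(partitions)

def process_in_batches(messages, key_attr, execute_batch):
    """Execute one process instance per partition instead of one per
    message, amortizing per-instance overhead over the whole batch."""
    return {key: execute_batch(batch)
            for key, batch in partition_messages(messages, key_attr).items()}

# Toy cost model: when per-instance overhead dominates per-message work,
# batching reduces the total processing time of a message sequence.
def cost_unbatched(n, overhead=10, per_msg=1):
    return n * (overhead + per_msg)

def cost_batched(partition_sizes, overhead=10, per_msg=1):
    return sum(overhead + size * per_msg for size in partition_sizes)
```

In this toy model, three messages processed individually cost 33 units, while the same messages in two batches cost 23; choosing the waiting time then trades this saving against the added latency.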
Download

Paper Nr: 119
Title:

SOFIA: AGENT SCENARIO FOR FOREST INDUSTRY

Authors:

Sergiy Nikitin, Vagan Terziyan and Minna Lappalainen

Abstract: The current economic situation in the Finnish forest industry urgently calls for a higher degree of efficiency at all stages of the production chain. The competitiveness of timber-based products depends directly and heavily on the raw material cost. At the same time, the success of companies that use timber determines the volume of raw wood consumption and therefore drives forest markets. However, wood-consuming companies (e.g. paper producers) cannot unilaterally dictate logging and transportation prices to their contractors, because the profitability of those contractors has already reached its reasonable margins. Research conducted in 2005-2008 shows an extremely high degree of inefficiency in logistic operations among logging and transportation companies. Some of them have already realized the need for cooperative optimization, which calls for cross-company integration of existing information and control systems; however, privacy and trust issues keep those companies from adopting open-environment solutions. Therefore, researchers have suggested new mediator-based business models that improve utilization while preserving the current state of affairs. New business solutions for logistic optimization can be built once a unified view of the market players is possible. Meanwhile, with the fast development of communication, RFID and sensor technologies, the forest industry sector is experiencing a technological leap. The adoption of innovative technologies opens possibilities for enacting new business scenarios driven by leading-edge ICT tools and technologies. We introduce an application scenario of the semantic agent platform UBIWARE for the forest industry sector of Finland.
Download

Paper Nr: 130
Title:

Workflow Management Issues in Virtual Enterprise Networks

Authors:

Andre Kolell and Jeewani A. Ginige

Abstract: Increasing competitive pressure and the availability of the Internet and related technologies have stimulated the collaboration of independent businesses. Such collaborations, aimed at achieving common business goals, are referred to as virtual enterprise networks (VENs). Though the web is an excellent platform for collaboration, the requirements of VENs regarding workflow management systems exceed those of autonomous organizations. This paper provides a comprehensive overview of numerous issues related to workflow management in VENs. These issues are discussed across the three phases of the virtual enterprise lifecycle (configuration, operation and dissolution) and corroborated by two real case studies of VENs in Australia.
Download

Paper Nr: 151
Title:

Access Rights in Enterprise Full-Text Search

Authors:

Jan Kasprzak, Matěj Čuhel, Tomáš Obšívač and Michal Brandejs

Abstract: One of the toughest problems to solve when deploying an enterprise-wide full-text search system is to handle the access rights of documents and intranet web pages correctly and effectively. Post-processing the results of a general-purpose full-text search engine (filtering out the documents inaccessible to the user who sent the query) can be an expensive operation, especially in large collections of documents. We discuss various approaches to this problem and propose a novel method which employs virtual tokens for encoding the access rights directly into the search index. We then evaluate this approach in an intranet system with several million documents and a complex set of access rights and access rules.
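The virtual-token idea can be sketched with a toy inverted index (the token prefix `acl:` and the class names below are illustrative assumptions, not the paper's implementation): access rights are indexed as ordinary terms, so filtering happens inside the index lookup instead of by post-processing the result list.

```python
from collections import defaultdict

class AclIndex:
    """Toy inverted index that encodes access rights as virtual tokens
    (e.g. 'acl:staff') stored alongside ordinary terms, so inaccessible
    documents are filtered during the index lookup itself."""

    def __init__(self):
        self.postings = defaultdict(set)

    def add(self, doc_id, terms, allowed_groups):
        for term in terms:
            self.postings[term].add(doc_id)
        # the virtual tokens live in the same postings structure as terms
        for group in allowed_groups:
            self.postings["acl:" + group].add(doc_id)

    def search(self, term, user_groups):
        # a hit must match the term AND at least one of the user's
        # ACL tokens; no separate post-filtering pass is needed
        hits = self.postings.get(term, set())
        visible = set()
        for group in user_groups:
            visible |= self.postings.get("acl:" + group, set())
        return hits & visible
```

A real engine would express the ACL part as an extra clause ANDed into the query, but the effect is the same: the per-result rights check is replaced by set intersection inside the index.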
Download

Paper Nr: 202
Title:

PROCESS-BASED DATA-STREAMING IN SERVICE-ORIENTED ENVIRONMENTS

Authors:

Steffen Preissler, Wolfgang Lehner and Dirk Habich

Abstract: Service-oriented environments are increasingly becoming the central point for enterprise-related workflows. This also holds for data-intensive service applications, where such process types face performance and resource issues. To tackle these issues at a more conceptual level, we propose a stream-based process execution that is inspired by the typical execution semantics of data management environments. More specifically, we present a data model and a process model, including a generalized concept for stream-based services. In our evaluation, we show that our approach outperforms the execution model of current service-oriented process environments.
Download

Paper Nr: 217
Title:

A platform dedicated to share and mutualize environmental applications

Authors:

Jean-Christophe Desconnets, Christelle Pierkot, Yuan Lin, Thérèse Libourel and Isabelle Mougenot

Abstract: Scientists in the environmental domains (biology, geographic information, etc.) need to capitalize on, distribute and validate their scientific experiments of varying complexity. A multi-function platform is a suitable candidate for meeting this need. After a short introduction to our context and objective, this article presents the MDWeb project, a platform that we have designed and built for sharing and mutualizing geographic data. Building on this platform, our main interest is now focused on providing users with a workflow environment, which will soon be integrated into the platform as a functional component. An introduction to a three-level workflow environment architecture (static, intermediate, dynamic) is presented. In this article, we focus mainly on the "static" level, which concerns the first phase of constructing a business process chain, and offer a discussion of the "intermediate" level, which covers both the instantiation of a business process chain and the validation, in terms of conformity, of the generated chain.
Download

Paper Nr: 252
Title:

ONTOP: A PROCESS TO SUPPORT ONTOLOGY CONCEPTUALIZATION

Authors:

Elis M. Hernandes, Deysiane Sande and Sandra Fabbri

Abstract: Although there are tools that support ontology construction, such tools do not necessarily give the conceptualization phase the execution resources it needs. The objective of this paper is to present the ONTOP Process (ONTOlogy conceptualization Process) as an effective means of enhancing the conceptualization phase of ontology construction. This process is supported by the ONTOP-Tool, which provides an iterative way of defining a collaborative glossary and uses a visual metaphor to facilitate the identification of ontology components. Once the components are defined, it is possible to generate an OWL file that can be used as input to other ontology editors. The paper also presents an application of both the process and the tool, which emphasizes the contributions of this proposal.
Download

Paper Nr: 262
Title:

PW-PLAN: A STRATEGY TO SUPPORT ITERATION-BASED SOFTWARE PLANNING

Authors:

Deysiane Sande, Arnaldo Sanchez, Renan Montebelo, Sandra Fabbri and Elis M. Hernandes

Abstract: Background: Although there are many techniques in the literature that support software size estimation, iteration-based software development planning is still based on developers’ personal experience in most companies. Particularly for agile methods, iteration estimation must be as precise as possible, since the success of this kind of development is intrinsically related to it. Aim: In order to establish systematic planning of iterations, this article presents the PW-Plan (Piece of Work Planning) strategy. This strategy is based on four items: iterative development, the use of a technique to estimate the complexity of the work to be done, the adoption of personal planning practices and the constant evaluation of the Effort Level (EL). Method: PW-Plan evolved from another strategy that was elaborated on the systematic practice of using Use Case Points, the Personal Software Process and constant EL evaluation. Results: PW-Plan was used by two small companies in two case studies, which showed that its application is feasible from a practical point of view and that it enhances development control. Conclusion: The case studies provide insights into PW-Plan's contribution to both the developer's and the manager's processes. Furthermore, applying the strategy provides more precise estimates for each iteration.
Download

Paper Nr: 347
Title:

EVALUATING THE QUALITY OF FREE/OPEN SOURCE ERP SYSTEMS

Authors:

Maria Tortorella, Lerina Aversano and Igino Pennino

Abstract: The selection and adoption of open source ERP projects can significantly impact the competitiveness of organizations. Small and Medium Enterprises have to deal with major difficulties due to the limited resources available for performing the selection process. This paper proposes a framework for evaluating the quality of Open Source ERP systems. The framework is obtained through a specialization of a more general one, called EFFORT (Evaluation Framework for Free/Open souRce projects). The usefulness of the framework is investigated through a case study.
Download

Paper Nr: 391
Title:

ENGINEERING PROCESS FOR CAPACITY-DRIVEN WEB SERVICES

Authors:

Imen Benzarti, Samir Tata, Zakaria Maamar, Nejib Ben Hadj-Alouane and Moez Yeddes

Abstract: This paper presents a novel approach to the engineering of capacity-driven Web services. By capacity, we mean that a Web service is empowered with several sets of operations from which it selectively triggers one set with respect to run-time environmental requirements. Because of the specificities of capacity-driven Web services compared to regular (i.e., mono-capacity) Web services, their engineering in terms of design, development, and deployment needs to be conducted in a specifically tailored way. Our approach defines an engineering process composed of five steps: (1) framing the requirements that could be put on these Web services; (2) defining capacities, how they are triggered and, last but not least, how they link to the requirements; (3) identifying the processes, in terms of business logic, that these Web services could implement; (4) generating the source code; and (5) generating the Capacity-driven Web Services Description Language (C-WSDL).
Download

Paper Nr: 392
Title:

FLEXIBLE DATA ACCESS IN ERP SYSTEMS

Authors:

Vadym Borovskiy, Wolfgang Koch and Alexander Zeier

Abstract: Flexible data access is a necessary prerequisite for satisfying a number of acute needs of ERP system users and application developers. However, currently available ERP systems do not provide the ability to access and manipulate ERP data at arbitrary granularity levels. This paper contributes the concept of query-like service invocation, implemented in the form of a business object query language (BOQL). Essentially, BOQL provides on-the-fly orchestration of the CRUD operations of business objects in an ERP system and achieves both the flexibility of SQL and the encapsulation of SOA. To demonstrate the power of the suggested concept, navigation, configuration and composite application development scenarios are presented in the paper. All suggestions have been prototyped on the Microsoft .NET platform.
Download

Paper Nr: 393
Title:

A Distributed Algorithm for Formal Concepts Processing based on Search Subspaces

Authors:

Nilander Moraes, Luis Z. Gálvez and Henrique Cota de Freitas

Abstract: The processing of dense contexts is a common problem in Formal Concept Analysis. From input contexts, all possible combinations must be evaluated in order to obtain all correlations between objects and attributes. The state of the art shows that this problem can be solved through distributed processing: partial concepts are obtained in a distributed environment composed of machine clusters and then combined to achieve the final set of concepts. Therefore, the goal of this paper is to propose, develop, and evaluate a high-performance distributed algorithm to solve the problem of dense contexts. The speedup achieved through the distributed algorithm shows an improvement in performance and, above all, a highly balanced workload, which reduces processing time considerably. For this reason, the main contribution of this paper is the distributed algorithm, capable of accelerating the processing of dense formal contexts.
Download

Short Papers
Paper Nr: 29
Title:

SW-ONTOLOGY: A PROPOSAL FOR SEMANTICS MODELING OF A SCIENTIFIC WORKFLOW MANAGEMENT SYSTEM

Authors:

Wander Gaspar, Regina Braga, Laryssa Silva and Fernanda Campos

Abstract: The execution of scientific experiments based on computer simulations constitutes an important contribution to the scientific community. In this sense, the implementation of a scientific workflow can be automated by Scientific Workflow Management Systems, whose goal is to provide the orchestration of all processes involved. Our work aims to capture the semantics involved in the implementation of scientific workflows using ontologies that capture the knowledge involved in these processes. Specifically, we present a prototype of an ontology based on a design pattern called “Model View” for the representation of knowledge in scientific workflow management systems.
Download

Paper Nr: 59
Title:

Solutions for speeding-up on-line dynamic signature authentication

Authors:

Valentin Andrei, Sorin M. Rusu and Stefan Diaconescu

Abstract: The article presents a study, and its experimental results, of methods for speeding up authentication of dynamic handwritten signatures in an on-line authentication system. We describe three solutions that use parallel computing: a 16-processor server, an FPGA development board and a graphics card using NVIDIA CUDA technology. For each solution, we detail how it can be integrated into an authentication provider system and specify its advantages and disadvantages.
Download

Paper Nr: 89
Title:

INTEGRATION OF REPOSITORIES IN ELEARNING SYSTEMS

Authors:

José P. Leal and Ricardo Queirós

Abstract: The wide acceptance of digital repositories in the eLearning field today raises several interoperability issues. In this paper we present the interoperability features of a service-oriented repository of learning objects called crimsonHex. These features are compliant with existing standards, and we propose extensions to the IMS interoperability recommendation, adding new functions, formalizing message interchange and also providing a REST interface. To validate the proposed extensions and their implementation in crimsonHex, we developed a repository plugin for Moodle 2.0 that is expected to be included in the next release of this popular learning management system.
Download

Paper Nr: 90
Title:

Collaborative Knowledge Evaluation with a Semantic Wiki: WikiDesign

Authors:

Abder Koukam, Vincent Hilaire, Samuel Gomes and Davy Monticolo

Abstract: In this paper we present how to ensure knowledge evaluation and evolution in a knowledge management system using a Semantic Wiki approach. We describe a Semantic Wiki called WikiDesign, which is a component of a knowledge management system. Currently WikiDesign is used in the engineering departments of companies to emphasize technical knowledge. In this paper, we explain how WikiDesign ensures the reliability of the knowledge base thanks to a knowledge evaluation process. After explaining the interest of using semantic wikis in a knowledge management approach, we describe the architecture of WikiDesign with its semantic functionalities. At the end of the paper, we prove the effectiveness of WikiDesign with a knowledge evaluation example from an industrial project.
Download

Paper Nr: 95
Title:

CLUX: CLUSTERING XML SUB-TREES

Authors:

Stefan Böttcher, Rita Hartel and Christoph Krislin

Abstract: XML has become the de facto standard for data exchange in enterprise information systems. But whenever XML data is stored or processed, e.g. in form of a DOM tree representation, the XML markup causes a huge blow-up of the memory consumption compared to the data, i.e., text and attribute values, contained in the XML document. In this paper, we present CluX, an XML compression approach based on clustering XML sub-trees. CluX uses a grammar for sharing similar substructures within the XML tree structure and a cluster-based heuristics for greedily selecting the best compression options in the grammar. Thereby, CluX allows for storing and exchanging XML data in a space efficient and still queryable way. We evaluate different strategies for XML structure sharing, and we show that CluX often compresses better than XMill, Gzip, and Bzip2, which makes CluX a promising technique for XML data exchange whenever the exchanged data volume is a bottleneck in enterprise information systems.
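The structure-sharing idea behind such grammar-based compressors can be illustrated with a much simpler stand-in: hash-consing identical subtrees so each distinct structure is stored only once. This sketch is only an illustration of sharing; CluX's cluster-based grammar heuristics go well beyond it.

```python
def share_subtrees(tree, pool):
    """Return a canonical node for `tree`, reusing an existing entry in
    `pool` whenever an identical subtree has been seen before.
    A tree is a pair (label, [children])."""
    label, children = tree
    # canonicalize children first so identical subtrees compare equal
    key = (label, tuple(share_subtrees(child, pool) for child in children))
    return pool.setdefault(key, key)
```

Running this over a tree with repeated substructures leaves fewer entries in the pool than there are nodes in the input; a grammar-based compressor similarly replaces repeated substructures with references to one shared production.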
Download

Paper Nr: 103
Title:

IMPROVING REAL WORLD SCHEMA MATCHING WITH DECOMPOSITION PROCESS

Authors:

Sana Sellami, Aïcha-Nabila Benharkat, Frederic Flouvat and Youssef Amghar

Abstract: This paper addresses a difficult problem: matching large XML schemas. Matching at scale incurs long execution times and decreases the quality of matches. In this paper, we propose an XML schema decomposition approach as a solution to the large schema matching problem. The presented approach identifies the common structures between and within XML schemas and decomposes the input schemas accordingly. Our method uses tree mining techniques to identify these common structures and to select the most relevant sub-parts of large schemas for matching. As shown by our experiments in the e-business domain, the proposed approach improves the performance of schema matching and offers better quality of matches in comparison to other existing matching tools.
Download

Paper Nr: 107
Title:

CHIEF INFORMATION OFFICER PROFILE IN HIGHER EDUCATION

Authors:

Adam Marks and Yacine Rezgui

Abstract: The purpose of this paper is to provide an unbiased picture of what many universities seek and expect in terms of requirements, attributes, competencies, and expected functions and duties in IT executive candidates, namely the Chief Information Officer (CIO). The paper provides a benchmark that can be used by universities to compare or build their IT leadership profiles. The study examines the education, experience, and other skills requirements in 374 active and archived electronic web advertisements for the position of CIO. It also looks at the expected job requirements for that role. The findings suggest that universities’ criteria and requirements for CIOs may vary based on factors such as the organization size and needs, the complexity of the organizational structure, and available budget, but are overall comparable with those of other industries.
Download

Paper Nr: 115
Title:

CONSTRAINT CHECKING FOR NON-BLOCKING TRANSACTION PROCESSING IN MOBILE AD-HOC NETWORKS

Authors:

Sebastian Obermeier and Stefan Böttcher

Abstract: Whenever business transactions involve databases located on different mobile devices in a mobile ad-hoc network, transaction processing should guarantee the following: atomic commitment and isolation of distributed transactions, and data consistency across different mobile devices. However, a major problem of distributed atomic commit protocols in mobile network scenarios is infinite transaction blocking, which occurs when a local sub-transaction that has voted for commit cannot be completed due to the loss of commit messages and due to network partitioning. For such scenarios, Bi-State-Termination has recently been suggested to terminate pending and blocked transactions, which makes it possible to overcome the infinite blocking problem. However, if the data distributed on different mobile devices has to be consistent according to some local or global database consistency constraints, Bi-State-Termination has not been able to check the validity of these consistency constraints on a database state involving the data of different mobile devices. Within this paper, we extend the concept of Bi-State-Termination to arbitrary read operations. We show how to handle several types of database consistency constraints, and experimentally evaluate our constraint checker using the TPC-C benchmark.
Download

Paper Nr: 120
Title:

ABSORPTION OF INFORMATION PROVIDED BY BUSINESS INTELLIGENCE SYSTEMS: The Effect of Information Quality on the Use of Information in Business Processes

Authors:

Jurij Jaklic, Pedro S. Coelho and Ales Popovic

Abstract: The fields of business intelligence and business intelligence systems have been gaining relative significance in the scientific area of decision support and decision support systems. In order to better understand mechanisms for providing benefits of business intelligence systems, this research establishes and empirically tests a model of business intelligence systems’ maturity impact on the use of information in organizational operational and managerial business processes, where this effect is mediated by information quality. Based on empirical investigation from Slovenian medium and large-size organizations the proposed structural model has been analyzed. The findings suggest that business intelligence system maturity positively impacts both segments of information quality, yet the impact of business intelligence system maturity on information media quality is greater than the impact on content quality. Moreover, the impact of information content quality on the use of information is much larger than the impact of information media quality. Consequently, when introducing business intelligence systems organizations clearly need to focus more on information content quality issues than they do currently.
Download

Paper Nr: 121
Title:

AN OPEN SOURCE SOFTWARE BASED LIBRARY CATALOGUE SYSTEM USING SOUNDEXING RETRIEVAL AND QUERY CACHING

Authors:

Jiansheng Huang, Yan Zhang and Zhenghui Pan

Abstract: It has been a challenge to apply effective knowledge management tools in information systems for modern libraries that deliver the most up-to-date, relevant information to end users in a quick, efficient, and user-friendly manner. In this paper, the authors present the design of a library catalogue system built principally with open source software. The integrated web-based library system adopts a client/server architecture with multiple tiers. For performance enhancement with respect to error tolerance, search speed and scalability, techniques of soundex-based retrieval and query caching were applied. With the support of an appropriately designed soundex algorithm, the catalogue system can largely increase its recall without compromising search precision. In addition, the introduced query cache speeds up the system response significantly.
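The paper's exact soundex variant is not specified here; the classic American Soundex it builds on, which maps similar-sounding words to the same letter-plus-three-digits code, can be sketched as:

```python
def soundex(word):
    """Classic American Soundex: encode a word as its first letter plus
    three digits, so similar-sounding names share a code."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = "".join(ch for ch in word.lower() if ch.isalpha())
    if not word:
        return ""
    first = word[0].upper()
    digits = []
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if ch in "hw":          # h and w do not break a run of equal codes
            continue
        if code and code != prev:
            digits.append(code)
        prev = code             # vowels reset the run
    return (first + "".join(digits) + "000")[:4]
```

Indexing both a term and its soundex code lets the catalogue match misspelled queries ("Smyth" for "Smith"), which raises recall; precision is preserved by ranking exact matches above phonetic ones.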
Download

Paper Nr: 129
Title:

Using Data Mining to Improve Student Retention in HE - A Case Study

Authors:

Ying Zhang, Samia Oussena, Tony Clark and Hyeonsook Kim

Abstract: Data mining combines machine learning, statistical and visualization techniques to discover and extract knowledge. One of the biggest challenges that higher education faces is improving student retention [12]. Student retention has become an indicator of academic performance and enrollment management. Our project uses data mining and natural language processing technologies to monitor and analyze student academic behavior and provide a basis for efficient intervention strategies. Our aim is to identify potential problems as early as possible and to follow up with intervention options to enhance student retention. In this paper we discuss how data mining can help spot students ‘at risk’, evaluate course or module suitability, and tailor interventions to increase student retention.
Download

Paper Nr: 147
Title:

MIGRATING LEGACY SYSTEMS TO A SERVICE-ORIENTED ARCHITECTURE WITH OPTIMAL GRANULARITY

Authors:

Saad Alahmari, David De Roure and Ed E. Zaluska

Abstract: The enhanced interoperability of business systems based on Service-Oriented Architecture (SOA) has created an increased demand for the re-engineering and migration of legacy software systems to SOA-based systems. Existing approaches focus mainly on defining coarse-grained services corresponding to business requirements, and neglect the importance of optimising service granularity based on service reusability, governance, maintainability and cohesion. An improved migration of legacy systems onto SOA-based systems requires identifying the ‘right’ services with an appropriate level of granularity. This paper proposes a novel framework for the effective identification of the key services in legacy code to provide such an optimal mapping. The framework focuses on defining these services (based on standardized modelling languages UML and BPMN) and provides effective guidelines for identifying optimal service granularity over a wide range of possible service types.
Download

Paper Nr: 154
Title:

WHAT-IF ANALYSIS IN OLAP: with a case study in supermarket sales data

Authors:

Emiel Caron and Hennie Daniels

Abstract: Today's OnLine Analytical Processing (OLAP) or multi-dimensional databases have limited support for what-if or sensitivity analysis. What-if analysis is the analysis of how the variation in the output of a mathematical model can be assigned to different sources of variation in the model's input. This functionality would give the OLAP analyst the possibility to play with "What if ...?" questions in an OLAP cube, for example, questions of the form: "What happens to an aggregated value in the dimension hierarchy if I change the value of this data cell by so much?" These types of questions are important, for example, for managers who want to analyse the effect of changes in sales on a product's profitability in an OLAP supermarket sales cube. In this paper, we extend the functionality of the OLAP database with what-if analysis.
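The core mechanism behind such "What if ...?" questions can be sketched with a one-level hierarchy and delta propagation (toy data and function names are illustrative, not from the paper): changing one base cell only shifts the aggregates on the path above it.

```python
def aggregate(sales, product_to_category):
    """Roll base-level sales up one level of the dimension hierarchy."""
    totals = {}
    for product, value in sales.items():
        category = product_to_category[product]
        totals[category] = totals.get(category, 0) + value
    return totals

def what_if(totals, product_to_category, sales, product, new_value):
    """Answer 'what happens to the aggregate if this cell changes?':
    propagate the delta of one base cell into precomputed totals
    instead of re-aggregating the whole cube."""
    delta = new_value - sales[product]
    updated = dict(totals)  # leave the original cube untouched
    updated[product_to_category[product]] += delta
    return updated
```

In a real cube the delta would be propagated along every dimension hierarchy the cell participates in, but the principle is the same at each level.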
Download

Paper Nr: 169
Title:

GOING VIRTUAL: Popular Trend or Real Prospect for Enterprise Information Systems

Authors:

Mariana Carroll, Paula Kotze and Alta van der Merwe

Abstract: Organisations are faced with a number of challenges and issues in decentralised, multiple-server, physical, non-virtualized IT environments. In recent years, virtualization has had a significant impact on computing environments and has introduced benefits including server consolidation, improved server and hardware utilization, and reduced costs. Virtualization's popularity has led to its growth in many IT environments. This paper provides an overview of the IT challenges in non-virtualized environments and addresses the question of whether virtualization provides the solution to these challenges.
Download

Paper Nr: 173
Title:

MANAGEMENT OF THE ORGANIZATIONAL KNOWLEDGE THROUGH PROCESS REUSE USING CASE-BASED REASONING

Authors:

Viviane A. Santos and Mariela Cortés

Abstract: Software process reuse involves different aspects of the knowledge obtained from generic process models and previous successful projects. The benefit of reuse is reached by the definition of an effective and systematic process to specify, produce, classify, retrieve and adapt software artifacts for utilization in another context. In this work we present a formal approach for software process reuse to assist the definition, adaptation and improvement of the organization’s standard process. A tool based on the Case-Based Reasoning technology is used to manage the collective knowledge of the organization.
Download

Paper Nr: 176
Title:

ATTACK SCENARIOS FOR POSSIBLE MISUSE OF PERIPHERAL PARTS IN THE GERMAN HEALTH INFORMATION INFRASTRUCTURE

Authors:

Ali Sunyaev, Alexander Kaletsch, Sebastian Dünnebeil and Helmut Krcmar

Abstract: This paper focuses on functional issues within the peripheral parts of the German health information infrastructure which compromise security and patients' information safety or might violate the law. Our findings demonstrate that a misuse of existing functionality is possible. With examples and detailed use cases, we show that the health infrastructure can be used for more than just ordinary electronic health care services. To validate this laboratory evidence, we tested all attack scenarios in a typical German physician's practice. Furthermore, security measures are provided to overcome the identified threats, and questions regarding these issues are discussed.
Download

Paper Nr: 185
Title:

ESTIMATING SOFTWARE DEVELOPMENT EFFORT USING TABU SEARCH

Authors:

Filomena Ferrucci, Rocco Oliveto, Federica Sarro and Carmine Gravino

Abstract: Several studies have recently been carried out to investigate the use of search-based techniques for estimating software development effort, and the reported results seem promising. Tabu Search is a meta-heuristic approach successfully used to address several optimization problems. In this paper, we report on an empirical analysis carried out by exploiting Tabu Search on two publicly available datasets, i.e., Desharnais and NASA. On these datasets, the exploited Tabu Search settings provided estimates comparable with those achieved by some widely used estimation techniques, thus suggesting further investigation of this topic.
Download

Paper Nr: 192
Title:

Fostering IT-Enabled Business Innovations - An Approach for CIOs to Innovate the Business

Authors:

Michael Lang and Michael Amberg

Abstract: Nowadays companies are in worldwide competition for innovations, which are essential to ensure their competitiveness and consequently their business success. Information Technology (IT) plays an important role in this context. Thus, IT organizations are expected to make their own value contribution to corporate performance by delivering innovations. This raises the question of how chief information officers (CIOs) can facilitate the generation of IT-enabled business innovations. As a result of our research, we identified 22 factors concerning the relevant aspects of IT organizations - IT projects, IT systems, IT processes, IT services, IT personnel and aspects referring to the organizational structure. These factors help CIOs to foster the generation of IT-enabled innovations and should therefore be considered in the management of IT organizations.
Download

Paper Nr: 199
Title:

AdaptIE - Using Domain Language concept to enable Domain Experts in Modeling of Information Extraction Plans

Authors:

Wojciech Barczynski, Daniel Schuster, Felix Foerster and Falk Brauer

Abstract: Implementing domain-specific Information Extraction (IE) technologies to retrieve structured information from unstructured data is a challenging and complex task. It requires both IE expertise (e.g., in linguistics) and domain knowledge, provided by a domain expert who is aware of, say, the text corpus specifics and entities of interest. While the IE expert role is addressed by several approaches, less has been done to involve domain experts in the process of IE development. Our approach targets this issue. We provide a base platform for collaboration between experts through IE plan modeling languages used to compose basic IE operators into complex IE flows. We provide each of the experts with a language adapted to their respective expertise: IE experts leverage a fine-grained view and domain experts use a coarse-grained view of IE execution. We use Model Driven Architecture concepts to enable transitions among the languages and operators provided by an algebraic IE framework. To prove the applicability of our approach we implemented an Eclipse-based tool – AdaptIE – and demonstrate it in a real-world scenario for the SAP Community Network.
Download

Paper Nr: 229
Title:

EFFICIENT SEMI-AUTOMATIC MAINTENANCE OF MAPPING BETWEEN ONTOLOGIES IN A BIOMEDICAL ENVIRONMENT

Authors:

Imen Ketata, Abdelkader Hameurlain, Riad Mokadem and Franck Morvan

Abstract: In dynamic environments like data management in the biomedical domain, adding a new element (e.g. a concept) to an ontology O1 requires creating significant mappings between O1 and the ontologies linked to it. To avoid creating such mappings for each added element, old mappings can be reused. Hence, the element w nearest to the added one should be retrieved in order to reuse its mapping schema. In this paper, we deal with the existing additive axiom, which can be used to retrieve this w. However, in such an axiom, the usage of some parameters, like the number of element occurrences, appears insufficient. We introduce a similarity calculation and the user’s opinion note in order to bring more precision and semantics to the retrieval of w. An illustrative example is presented to assess our contribution.
Download

Paper Nr: 235
Title:

AN INCREMENTAL PROCESS MINING ALGORITHM

Authors:

André Kalsing, Lucineia Heloisa Thom and Cirano Iochpe

Abstract: A number of process mining algorithms have already been proposed to extract knowledge from application execution logs. This knowledge includes the business process itself as well as business rules and organizational structure aspects, such as actors and roles. However, existing algorithms for extracting business processes neither scale very well on larger datasets nor support incremental mining of logs. Process mining can benefit from an incremental mining strategy, especially when the information system code is logically complex and requires a large dataset of logs for the mining algorithm to discover and present its complete business process behavior. Incremental process mining can also pay off when the complete business process model must be extracted gradually, by extracting partial models in a first step and integrating them into a complete model in a final step. This paper presents an incremental algorithm for mining business processes. The new algorithm enables the update as well as the enlargement and improvement of a partial process model as new log records are added to the log file. In this way, processing time can be significantly reduced, since only new event traces are processed rather than the complete log data.
Download

Paper Nr: 264
Title:

USING SQL/XML FOR EFFICIENTLY TRANSLATING QUERIES OVER XML VIEW OF RELATIONAL DATA

Authors:

Fernando Lemos, Clayton Costa and Vânia P. Vidal

Abstract: XML and Web services are becoming the standard means of publishing and integrating data over the Web. A convenient way to provide universal access (over the Web) to different data sources is to implement Web services that encapsulate XML views of the underlying relational data (Data Access Web Services). Given that most business data is currently stored in relational database systems, the generation of Web services for publishing XML views of relational data is of special significance. In this work, we propose RelP, a framework for publishing and querying relational databases through XML views. The main contribution of the paper is an algorithm that translates XML queries over a published XML view schema into a single SQL/XML query over the data source schema.
Download

Paper Nr: 266
Title:

A Flexible Framework for Applying Data Access Authorization Business Rules

Authors:

Leonardo Azevedo, Sergio P. Filho, Raphael Thiago, Claudia Cappelli and Fernanda Baião

Abstract: This work proposes a flexible framework for managing and implementing data access authorization business rules on top of relational DBMSs, independently of the applications accessing a database. The framework adopts the RBAC policy definition approach and was implemented on the Oracle DBMS. Therefore, data access security is built once in the data server, rather than in each application that accesses the data, and is enforced by the database server. Experimental tests were executed using data and queries from the TPC-H database benchmark, and the results indicate the effectiveness of the proposal.
Download

Paper Nr: 279
Title:

A GENETIC PROGRAMMING APPROACH TO SOFTWARE COST MODELING AND ESTIMATION

Authors:

Andreas S. Andreou, Efi Papatheocharous and Angela Iasonos

Abstract: This paper investigates the utilization of Genetic Programming (GP) as a method to facilitate better software cost modeling and estimation. The aim is to produce and examine candidate solutions in the form of representations that utilize operators and operands, which are then used in algorithmic cost estimation. These solutions essentially constitute regression equations of software cost factors, used to effectively estimate the dependent variable, that is, the effort spent on developing software projects. The GP application generates representative rules through which the usefulness of various project characteristics as explanatory variables, and ultimately as predictors of development effort, is investigated. The experiments conducted are based on two publicly available empirical datasets typically used in software cost estimation and indicate that the proposed approach provides consistent and successful results.
Download

Paper Nr: 314
Title:

BIDM: The Business Intelligence Development Model

Authors:

Catalina Sacu and Marco Spruit

Abstract: Business Intelligence (BI) has been a very dynamic and popular field of research in the last few years, as it helps organizations make better decisions and increase their profitability. This paper aims to bring some structure to the BI field of research by proposing a BI development model that relates the current BI development stages and their main characteristics. This framework can be used by organizations to identify their current BI stage and gain insight into how to improve their BI function.
Download

Paper Nr: 319
Title:

A Cyber-Physical System for Elders Monitoring

Authors:

Xiang Li, Ying Qiao and Hongan Wang

Abstract: In a new life style, elders can stay in multiple places instead of a single place. This new life style has two features, i.e. multiple scenarios and changes in elders’ status, which pose challenges to traditional elder monitoring systems in terms of cooperation and flexibility. This paper presents a new cyber-physical system for elder monitoring, which is divided into two layers: a sub-system layer and a global service layer. Active databases with a real-time event-condition-action (ECA) rule reasoning system are used as core components in the sub-systems and the global service to detect risks and cooperate with other systems actively, intelligently, and in real time. The structure of an ECA rule reasoning system is flexible, in that ECA rules can be adjusted on the fly. We discuss some properties of the cyber-physical system, including cooperation among sub-systems, flexibility and the real-time property. Finally, we present a case study to validate our work.
Download

Paper Nr: 361
Title:

Finding and Classifying Product Relationships Using Information from the Public Web

Authors:

Till M. Juchheim, Daniel Schuster and Alexander Schill

Abstract: Relationships between products, such as accessory or successor products, are hard to find on the Web or have to be inserted manually into product information systems. Finding and classifying such relations automatically, using only information from the public Web, offers great value for customers and vendors, as it helps to improve the buying process at low cost. We present and evaluate algorithms and methods for product relationship extraction on the Web requiring only a set of clustered product names as input. The solution can easily be implemented in different product information systems; it is most useful in, but not necessarily restricted to, the application domain of online shopping.
Download

Paper Nr: 380
Title:

Towards Location-Based Services standardization: An Application based on Mobility and Geo-Location

Authors:

El Kindi Rezig and Valérie Monfort

Abstract: Location-based services (LBSs) have improved users’ lives drastically through useful applications such as location-based advertising, vehicle tracking, mobile commerce, etc. However, most LBS applications are provider-specific and generally cannot coexist with LBS applications from other providers; consequently, consumers have to use different platforms and applications in order to discover and interact with different LBS applications. In this paper, we present an infrastructure based on geo-location and Web services to promote uniformity in the way LBS providers publish their services on the one hand, and the way consumers discover and interact with the LBSs they look for on the other hand. Moreover, the proposed infrastructure is context-aware and promotes dynamic service accessibility by offering consumers, as they move, the nearest services they are interested in.
Download

Paper Nr: 408
Title:

Defining an Unified Meta modeling Architecture for Deployment of Distributed Components-based Software applications

Authors:

Mariam Dibo and Noureddine Belkhatir

Abstract: Deployment is a complex process gathering the activities needed to make applications operational after development. Today, the component approach and distribution make deployment an even more complex process. Many deployment tools exist, but they are often built in an ad hoc way, i.e. specific to a technology or an architecture, and cover the deployment life cycle only partially. Hence there is an increased need for new techniques and tools to manage these systems. In this work, we focus on the deployment process, describing a framework called UDeploy. UDeploy (Generic Deployment framework) is based on a generic engine which permits, firstly, carrying out the planning process from meta-information related to the application and the infrastructure; secondly, generating specific deployment descriptors related to the application and the environment (i.e. the machines connected to a network where a software system is deployed); and finally, executing a plan produced by means of deployment strategies. The work presented in this paper focuses on a generic deployment architecture driven by meta-models and their transformations. In this respect, UDeploy is independent of any specific technology and also of any specific platform characteristic.
Download

Paper Nr: 418
Title:

p-MDAG: A Parallel MDAG Approach

Authors:

Joubert Lima and Celso M. Hirata

Abstract: In this paper, we present a novel parallel full cube computation approach, named p-MDAG. The p-MDAG approach is a parallel version of the sequential MDAG approach. The sequential MDAG approach outperforms the classic Star approach in dense, skewed and sparse scenarios. In general, the sequential MDAG approach is 25-35% faster than Star, consuming, on average, 50% less memory to represent the same data cube. The p-MDAG approach improves the runtime while keeping the low memory consumption; it uses an attribute-based data cube decomposition strategy which combines both task and data parallelism. The p-MDAG approach uses the dimensions’ attribute values to partition the data cube. It also redesigns the MDAG sequential algorithms to run in parallel. The p-MDAG approach provides both good load balance and memory consumption similar to the sequential approach. Its logical design can be implemented in shared-memory, distributed-memory and hybrid architectures with minimal adaptation.
Download

Paper Nr: 429
Title:

A PRACTICAL APPLICATION OF SOA: A COLLABORATIVE MARKETPLACE

Authors:

Sophie Rousseau, Olivier Camp and Slimane Hammoudi

Abstract: The economic context greatly impacts companies and their Information Systems (IS). Companies face new competitors, develop new business skills, and delocalize whole or part of their organization. Moreover, they are faced with powerful competitors, and new products fitting customer needs must be developed fast, sometimes in less than 3 months. These companies’ ISs need to cope with such complex evolutions and have to accommodate the resulting changes. Service-Oriented Architectures (SOA) are widely used by companies to gain flexibility, and Web services, by providing interoperability and loose coupling, are the technical solution best suited to support SOA. Basic Web services are assembled into composite Web services in order to directly support business processes. The aim of this work is to present a practical and successful application of SOA in the design of a collaborative marketplace. An implementation in the Oracle environment is achieved using the BPEL Process Manager, allowing an integration of the different components involved in the design of a collaborative marketplace. In particular, this paper illustrates, through a practical example, how the SOA approach promotes interoperability, one of its main strengths, by integrating an open source ERP into a mainly Oracle-based software architecture.
Download

Paper Nr: 1
Title:

Interface Usability of a Visual Query Language for Mobile GIS

Authors:

Haifa E. Elariss and Souheil Khaddaj

Abstract: In recent years, many non-expert mobile applications have been deployed to query Geographic Information Systems (GIS), in particular for Proximity Analysis, which concerns a user who asks questions related to his current position using a mobile phone. Thus, the new Iconic Visual Query Language (IVQL) has been developed and evaluated using a tourist application based on the map of Paris. The evaluation has been carried out to test various usability aspects such as the expressive power of the language, query formulation, and the user interface (GUI). The evaluation of the user interface presented here has been carried out through the user satisfaction of two subject groups, programmers and non-programmers. The results show that subjects found the IVQL GUI to be excellent software with a good organization of icons, and satisfying to use, with no significant difference between the two groups. The subjects also reported that they found learning to operate the system easy, exploring new features easy, remembering the use of the icons easy, and performing tasks straightforward.
Download

Paper Nr: 18
Title:

Agility by virtue of Architecture Maturity

Authors:

Gouri Prakash

Abstract: This poster demonstrates that although agile software process models purposefully and actively react to changes to baselined software requirements, as opposed to resisting changes, and although such models are met with scepticism by process-oriented orthodox software engineering practitioners, the maturity of the underlying architecture of the system can be leveraged to deliver the value proposition that agile software process models offer. Based on the degree of architecture maturity and the degree of agility observed during a software project, there are four types of projects, whose characteristics are discussed in this poster.
Download

Paper Nr: 38
Title:

PERFORMANCE OVERHEAD OF PARAVIRTUALIZATION ON AN EXEMPLARY ERP SYSTEM

Authors:

Andre Boegelsack, Helmut Krcmar and Holger Wittges

Abstract: This paper addresses aspects of performance overhead when using paravirtualization techniques. To quantify the overhead, the paper introduces and utilizes a new testing method, called the Zachmann test, to determine the performance overhead in a paravirtualized environment. The Zachmann test is used to perform CPU- and memory-intensive operations in a testing environment consisting of an exemplary Enterprise Resource Planning (ERP) system and a Xen hypervisor derivate. We focus on two issues: first, the performance overhead in general and second, the analysis of “overcommitment” situations. Our measurements show that the ERP system suffers a performance loss of up to 44% in virtualized environments compared to non-virtualized environments. Extreme overcommitment situations can lead to an additional overall performance loss of up to 10%. This work shows the first results from a quantitative analysis.
Download

Paper Nr: 40
Title:

INTELLIGENT DECISION-MAKING TIER IN SOFTWARE ARCHITECTURES

Authors:

Souheil Khaddaj and David Ong

Abstract: Despite recent developments, many challenges remain in applying intelligent control systems as intelligent decision-making providers when constructing software architectures, which are required in many applications, particularly in service-oriented computing. In this work we are particularly interested in the use of intelligent decision-making mechanisms for the management of software architectures. Our research aims at designing and developing an intelligent tier which allows dynamic system architecture configuration and provisioning. The tier is based on a number of logical reasoning and learning processes which use historical data and a set of rules for analysis and diagnosis in an attempt to offer a solution to a particular problem.
Download

Paper Nr: 55
Title:

An artifact-based architecture for a better flexibility of business processes

Authors:

Mounira Zerari and Mahmoud Boufaida

Abstract: Workflow management technology has been applied in many enterprise information systems. Business processes provide a means of coordinating interaction between workers and organizations in a structured way. However, traditional information systems struggle with the requirement to provide flexibility, due to the dynamic nature of the modern business environment. Accordingly, adaptive Process Management Systems (PMSs) have emerged, providing some flexibility by enabling dynamic process change at run time. There are various ways in which flexibility can be achieved; one of them is flexibility by underspecification. This kind of flexibility is not supported by current products (except YAWL). In addition, the approaches that currently exist do not consider the execution context of business process management. In this paper we propose an approach that supports flexibility by underspecification and considers the context of business process execution in a runtime environment. The main idea is to consider activities as independent parts of a process. Each activity is encapsulated in an entity (artifact). The decision of which activity (module, component) will be executed depends on the execution context and execution conditions, and we reason about the decision taken. We are motivated by making business processes easy to assemble from reusable components and by reasoning on the execution context.
Download

Paper Nr: 93
Title:

REUSING APPROACH FOR SOFTWARE PROCESSES BASED ON SOFTWARE ARCHITECTURES

Authors:

Fadila Aoussat, Mourad Oussalah and Mohamed A. Nacer

Abstract: Capitalizing on and reusing knowledge in the field of software process engineering is the objective of this work. In order to ensure high quality for software process models, with regard to the specific needs of new development techniques and methods, we propose an approach based on two essential points: the capitalization of knowledge through a domain ontology, and the reuse of this knowledge by handling software process models as software architectures.
Download

Paper Nr: 172
Title:

BALANCED TESTING SCORECARD: A model for evaluating and improving the performance of software testing houses

Authors:

Renata Alchorne, Rafael Nóbrega, Gibeon Aquino, Silvio Romero De Lemos Meira and Alexandre Vasconcelos

Abstract: Many companies have invested in the testing process to improve the test team’s performance. Although Testing Maturity Models aim at tackling this issue, they are unpopular among many of the highly competitive and innovative companies, because they encourage displacing the true goals of the mission with that of “achieving a higher maturity level”. Moreover, they generally have the effect of blinding an organization to the most effective use of its resources, as they focus only on satisfying the requirements of the model. This article defines the Balanced Testing Scorecard (BTSC) model, which aims to evaluate and improve the test team’s performance. The model, based on the Balanced Scorecard and Testing Maturity Models, is capable of aligning clients’ and financial objectives with testing maturity goals in order to improve both the test team’s and the client’s performance. The model was developed from the specialized literature and was applied in a software testing house as a case study.
Download

Paper Nr: 211
Title:

IMPROVING DATA QUALITY IN DATA WAREHOUSING APPLICATIONS

Authors:

Taoxin Peng, Lin Li and Jessie Kennedy

Abstract: Today, data plays an important role in people’s daily activities. With the help of data warehousing applications such as decision support systems and customer relationship management systems, useful information or knowledge can be derived from large quantities of data. However, investigations show that many such applications fail to work successfully. There is a growing awareness that high-quality data is a key to today’s business success, and dirty data that exists within data sources is one of the causes of poor data quality. To ensure high quality, enterprises need a process, methodologies and resources to monitor and analyze the quality of data, and to prevent, detect and repair dirty data. In practice, however, detecting and cleaning all the dirty data in all data sources is quite expensive and unrealistic, so the cost of cleaning dirty data needs to be considered by most enterprises. A conflict therefore arises when an organization intends to clean its databases: how should it select the most important data to clean based on its business requirements? In this paper, business rules are used to classify dirty data types based on data quality dimensions. The proposed method helps to solve this problem by allowing users to select the appropriate group of dirty data types based on the priority of their business requirements. It also provides guidelines for measuring data quality with respect to different data quality dimensions, which will be helpful for the development of data cleaning tools.
Download

Paper Nr: 220
Title:

A METRIC FOR RANKING HIGH DIMENSIONAL SKYLINE QUERIES

Authors:

Marlene Gonçalves and Graciela Perera

Abstract: Skyline queries have been proposed to express user preferences. Since the size of the Skyline set increases as the number of criteria grows, it is necessary to rank high dimensional Skyline queries. In this work, we propose a new metric to rank high dimensional Skylines which allows identifying the k most interesting elements of the Skyline set (Top-k Skyline). We have empirically studied the variability and performance of our metric. Our initial experimental results show that the metric is able to speed up the computation of the Top-k Skyline by up to two orders of magnitude with respect to the state-of-the-art metric, Skyline Frequency.
Download

Paper Nr: 225
Title:

SOFTWARE QUALITY MANAGEMENT FLOSS TOOLS EVALUATION

Authors:

María A. Perez, Edumilis M. Ortega, Kenyer Dominguez, Luis E. Mendoza and Cynthia De Oliveira

Abstract: Software Quality Management (SQM) tool selection can be quite a challenge, as can the precise definition of the functionalities that every tool of this kind should offer. On the other hand, establishing standards adequate to the FLOSS context involves analyzing both commercial and proprietary software development paradigms in contrast to the FLOSS philosophy. This article presents the evaluation of six different tools (four proprietary and two FLOSS tools) through the application of a Systemic Software Quality Model (MOSCA) instantiation in the SQM tool context, aiming to illustrate an accurate method for selecting tools of this nature and to identify areas of improvement for each of the evaluated tools.
Download

Paper Nr: 227
Title:

MANAGEMENT OF RISK IN ENVIRONMENT OF DISTRIBUTED SOFTWARE DEVELOPMENT - Results of the evaluation of a model for management of risk in distributed software projects

Authors:

Cirano Campos and Jorge Audy

Abstract: The objective of this article is to present the results of the evaluation of a risk management model for organizations that work with distributed software development - GeRDDoS. The model proposes risk management properly aligned between the global unit (head office) and the distributed unit (branch) executing the project, emphasizing that the success of the project depends on the success of the actions executed in both units. The model is an extension of the risk management proposal of the Software Engineering Institute (SEI), which defines a continuous and iterative process for managing risks, supported by coordination and communication throughout the whole life cycle of the project. In this article, the results of applying the proposed model are presented, through the analysis of a case study on the application of the model in a distributed software development company located in Brazil.
Download

Paper Nr: 240
Title:

A framework for supporting collaborative works in RIA by aspect oriented approach

Authors:

Hiroaki Fukuda

Abstract: This paper presents AspectFX, a novel approach to enabling developers and designers to collaborate effectively in RIA development. Unlike traditional web applications, RIAs are implemented by a number of developers and designers; it is therefore reasonable to divide an application into modules and assign them to developers and designers, and collaborative work among them has become important. The MVC architecture and OOP help to divide an application into functional units as modules and bring efficiency to development processes. To make these modules work together as a single application, developers have to describe method invocations to use the functionality implemented in the modules; however, these additional method invocations are not among the developers’ primary tasks. They strengthen the dependencies among modules, and these dependencies make it inefficient and difficult to implement and maintain an application. This paper describes the design and implementation of AspectFX, which introduces the aspect-oriented concept and treats the additional method invocations as cross-cutting concerns. AspectFX provides methods to separate the cross-cutting concerns from primary concerns and weaves them at runtime to compose them into an application.
Download

Paper Nr: 267
Title:

TOWARDS AUTOMATIC GENERATION OF APPLICATION ONTOLOGIES

Authors:

Eveline Sacramento, Vânia P. Vidal, Jose A. Macedo, Bernadette Loscio, Fernanda R. Lopes, Marco A. Casanova and Fernando Lemos

Abstract: In the Semantic Web, domain ontologies can provide the necessary support for linking together a large number of heterogeneous data sources. In our proposal, these data sources are described as local ontologies using an ontology language. Then, each local ontology is rewritten as an application ontology, whose vocabulary is restricted to a subset of the vocabulary of the domain ontology. Application ontologies enable the identification and association of semantically corresponding concepts, so they are useful for enhancing tasks like data discovery and integration. The main contribution of this work is a strategy to automatically generate such application ontologies and mappings, given a set of local ontologies, a domain ontology and the result of the matching between each local ontology and the domain ontology.
Download

Paper Nr: 280
Title:

A UNIFIED WEB-BASED FRAMEWORK FOR JAVA CODE ANALYSIS AND EVOLUTIONARY AUTOMATIC TEST-CASES GENERATION

Authors:

Anastasis Sofokleous, Andreas S. Andreou and Panayiotis Petsas

Abstract: This paper describes the implementation and integration of code analysis and testing systems in a unified web-enabled framework. The former analyses basic programs written in Java and constructs the control-flow, data-flow and dependence graph(s), whereas the testing system collaborates with the analysis system to automatically generate and evaluate test-cases with respect to control flow and data flow criteria. The current online framework supports the edge/condition and k-tuples criteria, which utilize the control flow and data flow graphs, respectively. The present work describes the design and implementation details of the framework and compares its unique features against other similar frameworks. In addition, it presents preliminary experimental results that show that the framework can be used for a variety of applications, including unit testing of large scale programs.
Download

Paper Nr: 419
Title:

ERP INTEGRATION PROJECT: Assessing impacts on organisational agents through a change management approach

Authors:

Clément Perotti, Stéphanie Minel, Benoît Roussel and Renaud Jean

Abstract: This paper investigates the effects of a large ERP deployment project on the organizational agents who use the system within the framework of their activities. In this article, we first present material showing that, in our case study, the project aims to standardize the company’s Information System (IS) and represents a change at both the individual and organizational levels. Second, we go into the details of project management and, more specifically, of change management within the framework of projects. Third, we advance arguments showing that “structured” change management approaches can be an efficient way to help the project team deal with individual change in order to succeed in ERP deployment.
Download

Paper Nr: 435
Title:

SYSTEM FOR STORAGE AND MANAGEMENT OF EEG/ERP EXPERIMENTS – GENERATION OF ONTOLOGY

Authors:

Roman Moucek and Petr Ježek

Abstract: This paper briefly describes a system for storing and managing data and metadata from EEG/ERP experiments. The system is planned to be registered as a source of neuroscience data and metadata; this is one of the reasons we need to provide a system ontology. Scientific papers often describe the domain using a semantic web language and consider this kind of domain modelling a crucial point of the software solution. However, real software applications rely on underlying data structures such as relational databases and object classes. That is why we summarize the fundamental differences in semantics between common data structures (relational databases, object-oriented code). Existing tools in the semantic web domain were studied and partially tested, and the first transformations from the system’s relational database and object-oriented code were performed.
Download

Area 2 - Artificial Intelligence and Decision Support Systems

Full Papers
Paper Nr: 35
Title:

KBE TEMPLATE UPDATE PROPAGATION SUPPORT Ontology and algorithm for update sequence computation

Authors:

Olivier Kuhn, Thomas Dusch, Parisa Ghodous and Pierre Collet

Abstract: This paper presents an approach to support Knowledge-Based Engineering template update propagation. Our aim is to provide engineers with a sequence of documents, giving the order in which they have to be processed to update them. To be able to compute a sequence, we need information about templates, Computer-Aided Design models and their relations. We designed an ontology for this purpose that, after inferring new knowledge, provides comprehensive knowledge about the templates and assemblies. This information is then used by a ranking algorithm that we have developed, which provides the sequence to follow in order to update models efficiently without a deep analysis of the dependencies. This prevents mistakes and saves time, as the analysis and choices are computed automatically.
Download

Paper Nr: 62
Title:

Coordinating Evolution: Designing a Self-Adapting Distributed Genetic Algorithm

Authors:

Nikolaos Chatzinikolaou

Abstract: In large-scale optimisation problems, the aim is to find near-optimal solutions in very large combinatorial spaces. This learning/optimisation process can be aided by parallelisation, but it is normally difficult for engineers to decide in advance how to split the task into appropriate segments attuned to the agents working on them. This paper chooses a particular style of algorithm (a form of genetic algorithm) and describes a framework in which the parallelisation and tuning of the multi-agent system is performed automatically, using a combination of self-adaptation of the agents and sharing of negotiation protocols between agents. These GA agents are themselves optimised through an evolutionary process of selection and recombination. Agents are selected according to the fitness of their respective populations, and during the recombination phase they exchange individuals from their populations as well as their optimisation parameters, which is what lends the system its self-adaptive properties. This allows the execution of “optimal optimisations” without the burden of tuning the evolutionary process by hand. The architecture we use has been shown to be capable of operating in peer-to-peer environments, raising confidence in its scalability through the autonomy of its components.
Download

Paper Nr: 75
Title:

IMPROVEMENT OF DIFFERENTIAL CRISP CLUSTERING USING ANN CLASSIFIER FOR UNSUPERVISED PIXEL CLASSIFICATION OF SATELLITE IMAGE

Authors:

Indrajit Saha, Ujjwal Maulik, Sanghamitra Bandyopadhyay and Dariusz Plewczynski

Abstract: An important approach to unsupervised pixel classification in remote sensing satellite imagery is to use clustering in the spectral domain. In particular, satellite images contain landcover types some of which cover significantly large areas, while some (e.g., bridges and roads) occupy relatively much smaller regions. Detecting regions or clusters of such widely varying sizes presents a challenging task. This fact motivated us to present a novel approach that integrates a differential evolution based crisp clustering scheme with an artificial neural network (ANN) based probabilistic classifier to yield better performance. Real-coded encoding of the cluster centres is used for the differential evolution based crisp clustering. The clustered solution is then used to find some points based on their proximity to the respective centres. The ANN classifier is thereafter trained by these points. Finally, the remaining points are classified using the trained classifier. Results demonstrating the effectiveness of the proposed technique are provided for several synthetic and real-life data sets. A statistical significance test has also been performed to establish the superiority of the proposed technique. Moreover, a remotely sensed image of Bombay city has been classified using the proposed technique to establish its utility.
Download

Paper Nr: 140
Title:

ConTask - Using Context-Sensitive Assistance to Improve Task-Oriented Knowledge Work

Authors:

Heiko Maus, Andreas Dengel, Jan Haas and Sven Schwarz

Abstract: The paper presents an approach to support knowledge-intensive tasks with a context-sensitive task management system that is integrated into the user's personal knowledge space represented in the Nepomuk Semantic Desktop. The context-sensitive assistance is based on the combination of user observation, agile task modelling, automatic task prediction, as well as elicitation and proactive delivery of relevant information items from the knowledge worker's personal knowledge space.
Download

Paper Nr: 146
Title:

APPLICATION OF SOFM CLUSTERING TECHNIQUE TO IDENTIFY AREAS WITH SIMILAR WIND PATTERNS

Authors:

José Carlos Palomares, Juan-Jose Gonzalez De La Rosa, Jose-Gabriel Ramiro Leo and Agustín Agüera

Abstract: This paper presents a process for demarcating areas with analogous wind conditions. For this purpose, a dispersion graph between wind directions is traced for all stations placed in the studied zone. These distributions are compared among themselves using the centroids extracted with the SOFM algorithm. This information is used to build a matrix, letting us work with all relations simultaneously. By permuting the elements of this matrix it is possible to group related stations.
Download

Paper Nr: 191
Title:

VISUAL TREND ANALYSIS Ontology Evolution Support for the Trend Related Industry Sector

Authors:

Jessica Huster and Andreas Becks

Abstract: Ontologies are used as knowledge bases to exchange, extract and integrate information in information retrieval and search. They provide a shared and common understanding that reaches across people and application systems. In reality, domain-specific and technical knowledge evolve over time, and so must ontologies. Creative domains, such as the home textile industry, are representative of quickly evolving domains. In this domain it is also important to provide methodologies for visualising knowledge evolution. In this paper we report on our ontology-based trend analysis tool, which supports marketing experts and designers in identifying trend drifts and in comparing the analysis results against the ontology. Furthermore, means to adapt and evolve the ontology in accordance with the changing domain are provided.
Download

Paper Nr: 204
Title:

TOWARDS THE FORMALISATION OF THE TOGAF CONTENT METAMODEL USING ONTOLOGIES

Authors:

Aurona Gerber, Paula Kotze and Alta van der Merwe

Abstract: Metamodels are abstractions that are used to specify characteristics of models. Such metamodels are generally included in specifications or framework descriptions. A metamodel is, for instance, used to inform the generation of enterprise architecture content in the Open Group’s TOGAF 9 Content Metamodel description. However, the description of metamodels is usually done in an ad-hoc manner with customised languages, and this often results in ambiguities and inconsistencies. We are concerned with the question of how the quality of metamodel descriptions, specifically within the enterprise architecture domain, could be enhanced. Therefore we investigated whether formal ontology technologies could be used to enhance metamodel construction, specification and design. For this research, we constructed a formal ontology for the TOGAF 9 Content Metamodel and, in the process, gained valuable insight into metamodel quality. In particular, the current Content Metamodel contains ambiguities and inconsistencies, which could be eliminated using ontology technologies. In this paper we argue for the integration of formal ontologies and ontology technologies as tools into metamodel construction and specification. Ontologies allow for the construction of complex conceptual models but, more significantly, ontologies can assist an architect by depicting all the consequences of a model, allowing for more precise and complete artifacts within enterprise architectures; and because these models use standardized languages, they should promote integration and interoperability.
Download

Paper Nr: 205
Title:

THESAURUS BASED SEMANTIC REPRESENTATION IN LANGUAGE MODELING FOR MEDICAL ARTICLE INDEXING

Authors:

Jihen Majdoubi, Mohamed Tmar and Faiez Gargouri

Abstract: The language modeling approach plays an important role in many areas of natural language processing, including speech recognition, machine translation and information retrieval. In this paper, we propose a contribution to the conceptual indexing of medical articles using the MeSH (Medical Subject Headings) thesaurus, and we present a tool for indexing medical articles called SIMA (System of Indexing Medical Articles), which uses a language model to extract the MeSH descriptors representing a document. To assess the relevance of a document to a MeSH descriptor, we estimate the probability that the MeSH descriptor would have been generated by the language model of this document.
Download

Paper Nr: 248
Title:

EXTENDING MAS-ML TO MODEL PROACTIVE AND REACTIVE SOFTWARE AGENTS

Authors:

Enyo T. Gonçalves, Mariela Cortés, Viviane Torres da Silva and Gustavo Campos

Abstract: The existence of Multi-Agent Systems (MAS) where agents with different internal architectures interact to achieve their goals promotes the need for a language capable of modeling these applications. In this context we highlight MAS-ML, a MAS modeling language that performs a conservative extension of UML while incorporating agent-related concepts. Nevertheless, MAS-ML was developed to support only proactive agents. This paper aims to extend MAS-ML to support the modelling of not only proactive but also reactive agents, based on the architectures described in the literature.
Download

Paper Nr: 257
Title:

REFINING THE TRUSTWORTHINESS ASSESSMENT OF SUPPLIERS THROUGH EXTRACTION OF STEREOTYPES

Authors:

Maria J. Urbano, Ana Paula Rocha and Eugénio Oliveira

Abstract: Trust management is nowadays considered a promising enabling technology for extending the automation of the supply chain to the search, evaluation and selection of suppliers located world-wide. Current agent-based Computational Trust and Reputation (CTR) systems concern the representation, dissemination and aggregation of trust evidence for trustworthiness assessment, and some recent proposals are moving towards situation-aware solutions that allow the estimation of trust when the information about a given supplier is scarce or even null. However, these enhanced, situation-aware proposals rely on ontology-like techniques that are not fine-grained enough to detect slight, but relevant, tendencies in suppliers’ behaviour. In this paper, we propose a technique that allows the extraction of positive and negative tendencies of suppliers in the fulfilment of established contracts. This technique can be used with any of the existing “traditional” CTR systems, improving their ability to select a partner based on the characteristics of the situation under evaluation. We test our proposal using an aggregation engine that embeds important properties of the dynamics of trust building.
Download

Paper Nr: 260
Title:

FONTE: A Protege Plugin for Engineering of Complex Ontologies by Assembling Modular Ontologies of Space, Time and Domain Concepts

Authors:

Jorge Santos, Luis Braga and Anthony Cohn

Abstract: Humans present a natural ability to reason about scenarios involving spatial and temporal information, but for several reasons the process of developing complex ontologies that include time and/or space is still not well developed and remains a one-off, labour-intensive experience. In this paper we present FONTE (Factorising ONTology Engineering complexity), an ontology engineering methodology that relies on a divide-and-conquer strategy. The targeted complex ontology is built by assembling modular ontologies that capture temporal, spatial and domain (atemporal and aspatial) aspects. To support the proposed methodology we developed a plugin for Protégé, one of the most widely used open-source ontology editors and knowledge-base frameworks.
Download

Paper Nr: 275
Title:

AN AGENT-BASED APPROACH TO CONSUMER'S LAW DISPUTE RESOLUTION

Authors:

Nuno Costa, Davide Carneiro, Paulo Novais, Diovana Barbieri and Francisco Andrade

Abstract: Buying products online gives rise to a new type of trade that traditional legal systems are not ready to deal with. Besides that, the increase in B2C relations has led to a growing number of consumer claims, many of which are not getting a satisfactory response. New approaches that do not involve traditional litigation are needed, considering not only the slowness of the judicial system but also the cost/benefit relation of legal procedures. This paper points out an alternative way of solving these conflicts online, using Information Technologies and Artificial Intelligence methodologies. The work presented here results in a consumer advice system, which speeds up and simplifies the conflict resolution process both for consumers and for legal experts.
Download

Paper Nr: 296
Title:

Mining Timed Sequences with TOM4L Framework

Authors:

Nabil Benayadi and Marc Le Goc

Abstract: We introduce the problem of mining sequential patterns among timed messages in large databases of sequences using a stochastic approach. For example, consider a database recording the sensor values of a car at given times. An example of the patterns we are interested in is: 50% of engine stops in the car happen between 0 and 2 minutes after observing a lack of gas in the engine, which occurs between 0 and 1 minutes after the fuel tank is empty. We call these patterns “signatures”. Previous research has considered equivalent patterns, but such work has three main problems: (1) the sensitivity of the algorithms to their parameter values, (2) the excessively large number of discovered patterns, and (3) the discovered patterns consider only the “after” relation (succession in time) and omit temporal constraints between pattern elements. To address these issues, we present the TOM4L process (Timed Observations Mining for Learning), which uses a stochastic representation of a given set of sequences, on which inductive reasoning coupled with abductive reasoning is applied to reduce the search space. A very simple example is used to illustrate the problems of previous approaches. The results obtained with an application to a very complex real-world system monitored by a large-scale knowledge-based system, the Sachem system of the Arcelor-Mittal Steel group, are also presented to show the operational character of the TOM4L process.
Download

Paper Nr: 336
Title:

STATISTICAL ASSOCIATIVE CLASSIFICATION OF MAMMOGRAMS: THE SACMINER METHOD

Authors:

Carolina V. Watanabe, Marcela Xavier Ribeiro, Caetano Traina Jr. and Agma Traina

Abstract: In this paper, we present a new method called SACMiner for mammogram classification using statistical association rules. The method employs two new algorithms: StARMiner* and the voting classifier (V-classifier). StARMiner* mines association rules over continuous feature values, avoiding the bottlenecks and inconsistencies that a discretization step introduces into the learning model. The V-classifier decides which class best represents a test image, based on the statistical association rules mined. Experiments comparing SACMiner with other traditional classifiers in detecting breast cancer in mammograms show that the proposed method reaches higher values of accuracy, sensitivity and specificity. The results indicate that SACMiner is well suited to classifying mammograms. Moreover, the proposed method has a low computational cost, being linear in the number of dataset items, when compared with other classifiers. Furthermore, SACMiner is extensible to other types of medical images.
Download

Paper Nr: 346
Title:

MODELING LARGE SCALE MANUFACTURING PROCESS FROM TIMED DATA: Using the TOM4L Approach and Sequence Alignment Information for Modeling STMicroelectronics’ Production Processes

Authors:

Pamela Viale, Jacques Pinaton, Nabil Benayadi and Marc Le Goc

Abstract: Modeling the manufacturing processes of complex products like electronic chips is crucial to maximizing production quality. The process mining methods developed over the last decade aim at modeling such manufacturing processes from the timed messages contained in the database of the process supervision system. Such processes can be complex, making it difficult to apply the usual process mining algorithms. This paper proposes to apply the TOM4L approach (Timed Observations Mined for Learning) to model large-scale manufacturing processes. A series of timed messages is considered as a sequence of class occurrences and is represented with a Markov chain from which models are deduced with abductive reasoning. Because sequences can be very long, a notion of process phase is introduced. Sequences are cut based on the concept of class of equivalence and on information obtained from an appropriate alignment between the sequences considered. A model for each phase can then be produced locally. The model of the whole manufacturing process is obtained from the concatenation of the models of the different phases. This paper presents the application of this method to model STMicroelectronics’ manufacturing processes. STMicroelectronics’ interest in modeling its manufacturing processes stems from the necessity of detecting discrepancies between the real processes and the experts’ definitions of them.
Download

Paper Nr: 365
Title:

A HIERARCHICAL HANDWRITTEN OFFLINE SIGNATURE RECOGNITION SYSTEM

Authors:

Ioana Barbantan, Camelia Lemnaru and Rodica Potolea

Abstract: This paper presents an original approach for solving the problem of offline handwritten signature recognition, and a new hierarchical, data-partitioning based solution for the recognition module. Our approach tackles the problem we encountered with an earlier version of our system when we attempted to increase the number of classes in the dataset: as the complexity of the dataset increased, the recognition rate dropped unacceptably for the problem considered. The new approach employs a data partitioning strategy to generate smaller sub-problems, for which the induced classification model should attain better performance. Each sub-problem is then submitted to a learning method, to induce a classification model in a similar fashion with our initial approach. We have performed several experiments and analyzed the behavior of the system by increasing the number of instances, classes and data partitions. We continued using the Naïve Bayes classifier for generating the classification models for each data partition. Overall, the classifier performs in a hierarchical way: a top level for data partitioning via clustering and a bottom level for classification sub-model induction, via the Naïve Bayes classifier. Preliminary results indicate that this is a viable strategy for dealing with signature recognition problems having a large number of persons.
Download

Paper Nr: 379
Title:

EVALUATING PREDICTION STRATEGIES IN AN ENHANCED META-LEARNING FRAMEWORK

Authors:

Silviu D. Cacoveanu, Camelia Lemnaru and Rodica Potolea

Abstract: Finding the best learning strategy for a new domain/problem can prove to be an expensive and time-consuming process even for experienced analysts. This paper presents several enhancements to a meta-learning framework we have previously designed and implemented. Its main goal is to automatically identify the most reliable learning schemes for a particular problem, based on the knowledge acquired about existing data sets, while minimizing the work done by the user but still offering flexibility. The main enhancements proposed here refer to the addition of several classifier performance metrics, including two original metrics, for widening the evaluation criteria, the addition of several new benchmark data sets for improving the outcome of the neighbor estimation step, and the integration of complex prediction strategies. Systematic evaluations have been performed to validate the new context of the framework. The analysis of the results revealed new research perspectives in the meta-learning area.
Download

Paper Nr: 439
Title:

Effective Analysis of Flexible Collaboration Processes by way of Abstraction and Mining Techniques

Authors:

Alfredo Cuzzocrea, Luigi Pontieri and Francesco Folino

Abstract: A knowledge-based framework for supporting and analyzing loosely-structured collaborative processes (LSCPs) is presented in this paper. The framework takes advantage of a number of knowledge representation, management and processing capabilities, including recent process mining techniques. In order to support the enactment, analysis and optimization of LSCPs in an Internet-worked virtual scenario, we illustrate a flexible integration architecture, coupled with a knowledge representation and discovery environment, and enhanced by ontology-based knowledge processing capabilities. In particular, an approach for restructuring logs of LSCPs is proposed, which allows LSCPs to be effectively analyzed at varying abstraction levels with process mining techniques (originally devised to analyze well-specified and well-structured workflow processes). The capabilities of the proposed framework were experimentally tested in several application contexts. Interesting results concerning the experimental analysis of collaborative manufacturing processes across a distributed CAD platform are shown.
Download

Short Papers
Paper Nr: 14
Title:

Diagnosis of active systems by lazy techniques

Authors:

Gianfranco Lamperti and Marina Zanella

Abstract: In society, laziness is generally considered a negative feature, if not a capital fault. Not so in computer science, where lazy techniques are widespread, either to improve efficiency or to allow for the computation of unbounded objects, such as infinite lists in modern functional languages. We bring the idea of lazy computation to the context of model-based diagnosis of active systems. Up to a decade ago, all approaches to diagnosis of discrete-event systems required the generation of the global system model, a technique that is impractical when the system is large and distributed. To overcome this limitation, a lazy approach was then devised in the context of diagnosis of active systems, which works with no need for the global system model. However, a similar drawback arose a few years later, when uncertain temporal observations were proposed. In order to reconstruct the system behavior based on an uncertain observation, an index space is generated as the determinization of a nondeterministic automaton derived from the graph of the uncertain observation, the prefix space. The point is that the prefix space and the index space suffer from the same computational difficulties as the system model. To confine the explosion of memory space when dealing with diagnosis of active systems with uncertain observations, a laziness-based, circular-pruning technique is presented. Experimental results offer evidence for the considerable effectiveness of the approach, both in space and time reduction.
Download

Paper Nr: 15
Title:

Algorithm applied in the prioritization of the stakeholders

Authors:

Luciano Paula and Anna M. Gil-Lafuente

Abstract: According to scientific studies, maintaining relationships with all stakeholders and addressing all issues related to sustainability in business is neither possible nor desirable. The company should seek to establish an order of priorities for stakeholders and issues to ensure good management of time, resources and expectations. Based on stakeholder theory, we discuss the importance of stakeholder management in the pursuit of corporate sustainability. In this paper we focus our research on the prioritization of stakeholders through the analysis of an empirical study by a consulting firm in Brazil. In this case, the company needs to establish priorities among its stakeholders. To achieve this objective, the consulting firm used a fuzzy logic algorithm, applying the P-Latin composition. To complete the study, we present the contributions, the empirical results and the conclusions of our investigation.
Download

Paper Nr: 37
Title:

Classification of market news and prediction of market trends

Authors:

Petr Kroha and Robert Nienhold

Abstract: In this contribution we present our results concerning the influence of news on market trends. We processed stock news with a grammar-driven classification. We found some potential for market trend prediction and present promising experimental results. As the main result, we show that over the last 10 years there are exactly two points representing a minimum of the good-news/bad-news ratio, corresponding to the two long-term trend breaking points, and that in both cases these points appear about six months before the long-term market trend changed.
Download

Paper Nr: 100
Title:

USING KEY PERFORMANCE INDICATORS TO FACILITATE THE STRATEGY IMPLEMENTATION AND BUSINESS PROCESS IMPROVEMENT IN SME’s

Authors:

Miguel Merino and David Neila

Abstract: The use of metrics for a wide range of issues in business management is common in large organizations but infrequent in small enterprises. Business Intelligence (BI) is seen largely as the domain of large organizations, yet a business of any size can benefit greatly from BI tools. To date, the cost of BI technology has been too high for small businesses, and its scope has been limited to market operations and financial reports. We develop a method for successful BI implementation in SMEs on the basis of performance indicators derived from business process monitoring, constructing a plan of action (business initiatives) to carry out a defined strategy.
Download

Paper Nr: 105
Title:

CONTEXT-POLICY-CONFIGURATION: Paradigm of Intelligent Autonomous System Creation.

Authors:

Oleksiy Khriyenko, Sergiy Nikitin and Vagan Terziyan

Abstract: The next generation of integration systems will utilize different methods and techniques to achieve the vision of ubiquitous knowledge: Semantic Web and Web Services, agent technologies and mobility. Nowadays, unrestricted interoperability and collaboration are important for industry, business, education and research, health and wellness, and other areas of life. All the parties in a collaboration process have to share data as well as information about the actions they are performing. During the last couple of years, policies have gained attention both in research and industry. Policies are considered an appropriate means for controlling the behaviour of complex systems and are used in different domains for automating system administration tasks (configuration, security, or Quality of Service (QoS) monitoring and assurance). The paper presents a Semantic Web driven approach for context-aware, policy-based system configuration. The proposed Context-Policy-Configuration approach for the creation of intelligent autonomous systems allows system behaviour to be modified without changing source code or requiring information about the dependencies of the components being governed. The system can be continuously adjusted to externally imposed constraints by policy determination and change.
Download

Paper Nr: 144
Title:

Normalization procedures on Multicriteria Decision Making: an example on environmental problems

Authors:

Estefanía García-Vázquez, Joaquín Pérez Navarro and Ethel Mokotoff

Abstract: In Multicriteria Decision Making, a normalization procedure is required to conduct the aggregation process. Even though this methodology is widely applied in strategic decision support systems, few published papers address this specific question in detail. In this paper we analyze the influence of normalization procedures on the weighted sum aggregation method in multicriteria decision problems devoted to sustainable development.
Download

Paper Nr: 153
Title:

A Review of Learning Methods Enhanced in Strategies of Negotiating Agents

Authors:

Marisa Masvoula, Panagiotis Kanellis and Drakoulis Martakos

Abstract: The advancement of Artificial Intelligence has contributed to the enhancement of agent strategies with learning techniques. We provide an overview of the learning methods that form the core of state-of-the-art negotiators. The main objective is to facilitate comprehension of the domain by framing current systems with respect to learning objectives and phases of application. We also aim to reveal current trends and the virtues and weaknesses of applied methods.
Download

Paper Nr: 156
Title:

Male and Female Chromosomes in Genetic Algorithms

Authors:

Ghodrat Moghadampour

Abstract: Evolutionary algorithms work on randomly generated populations, which converge over runs toward the desired optima. Randomly generated populations are of different qualities based on their average fitness values. In many cases, switching all bits of a randomly generated binary individual to their opposite values can quickly produce a better individual. This technique increases diversity among individuals in the population and allows exploring the search space more rigorously. In this research, the effect of such an operation during population initialization and the crossover operator has been investigated. Experimentation with 44 test problems over 2200 runs showed that this technique can produce better individuals on average in around 32% of cases.
Download

Paper Nr: 157
Title:

EFFICIENT LEARNING OF DYNAMIC BAYESIAN NETWORKS FROM TIMED DATA

Authors:

Ahmad Ahdab and Marc Le Goc

Abstract: This paper addresses the problem of learning a Dynamic Bayesian network from timed data without prior knowledge of the system. One of the main problems in learning a Dynamic Bayesian network is building and orienting the edges of the network while avoiding loops. The problem is more difficult when the data are timed. This paper proposes an algorithm based on an adequate representation of a set of sequences of timed data, using an information-based measure of the relations between two edges. This algorithm is part of the Timed Observation Mining for Learning (TOM4L) process, which is based on the Theory of Timed Observations. The paper illustrates the algorithm with an application to the Apache system of the Arcelor-Mittal Steel Group, a real-world knowledge-based system that diagnoses a galvanization bath.
Download

Paper Nr: 170
Title:

DecisionWave: Embedding Collaborative Decision Making Into Business Processes

Authors:

Christoph Krammer and Alexander Mädche

Abstract: DecisionWave is a platform that allows distributed teams to jointly make decisions within business processes. The focus of this work is to provide the conceptual architecture and a prototype for the seamless integration of communication support for group decisions into existing business processes within operational information systems. The DecisionWave system can then be used to document the decision process and generate templates for future decisions of the same type. Together with the architecture, we built a prototype using the salesforce.com CRM system and the Google Wave communication infrastructure.
Download

Paper Nr: 171
Title:

TRAINING A FUZZY SYSTEM IN WIND CLIMATOLOGIES DOWNSCALING

Authors:

Agustín Agüera, Juan-Jose Gonzalez De La Rosa, Jose-Gabriel Ramiro Leo and José Carlos Palomares

Abstract: The wind climate measured at a point is usually described as the result of a regional wind climate forced by local effects derived from topography, roughness and obstacles in the surrounding area. This paper presents a method that uses fuzzy logic to generate the local wind conditions caused by these geographic elements. The fuzzy systems proposed in this work are specifically designed to modify a regional wind frequency rose according to the terrain slopes in each direction. In order to optimize these fuzzy systems, Genetic Algorithms improve an initial population and, eventually, select the individual which produces the best approximation to the real measurements.
Download

Paper Nr: 177
Title:

AN INTELLIGENT FRAMEWORK FOR AUTOMATIC EVENT DETECTION IN ROBOTIC SOCCER GAMES

Authors:

João Portela, Pedro Abreu, Luís P. Reis, Eugénio Oliveira and Julio Garganta

Abstract: In soccer, the level of performance is determined by a number of complex, interrelated variables: technique, tactics, psychological factors and, finally, fitness. Because of this, analyzing this information in real time has become an impossible task, even for soccer experts like professional coaches. Automatic event detection tools occupy an important role in this reality, although nowadays there is no tool capable of producing information that helps a professional coach choose his team's strategy for a specific game. In this research project, an automatic event detection tool is proposed, together with a set of game statistics defined by a group of sports researchers. All the teams present in the 2009 RoboCup tournament have a pass success rate above 65%. These statistics provide an interesting viewpoint on how to evaluate a team's performance, such as the importance of dominating the opposing team's field without losing control of our own (this can be seen in the top 3 zone dominance statistics). In the future, this project will serve as a base for building a framework capable of simulating a match between two heterogeneous soccer teams and producing reliable information for optimizing team performance.
Download

Paper Nr: 196
Title:

AUTOMATIC SEARCH-BASED TESTING WITH THE REQUIRED K-TUPLES CRITERION

Authors:

Anastasis Sofokleous, Andria Krokou and Andreas S. Andreou

Abstract: This paper examines the use of data flow criteria in software testing and uses evolutionary algorithms to automate the generation of test data with respect to the required k-tuples criterion. The proposed approach is incorporated into an existing test data generation framework consisting of a program analyzer and a test data generator. The former analyses Java programs, creates control and data flow graphs, generates paths in relation to data flow dependencies, simulates test case execution and determines code coverage on the control flow graphs. The test data generator takes advantage of the program analyzer's capabilities and generates test cases by utilizing a series of genetic algorithms. The performance of the framework is compared to similar methods and evaluated using both standard and randomly generated Java programs. The preliminary results demonstrate the efficacy and efficiency of this approach.
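As a general illustration of search-based test data generation (not the authors' framework, which targets the required k-tuples criterion on Java programs), a genetic search can evolve program inputs by minimizing a branch-distance fitness. The target predicate below is hypothetical and exists only for the sketch.

```python
import random

def branch_distance(x):
    # Hypothetical hard-to-hit branch in the program under test: `if x == 500:`.
    # Standard search-based fitness: distance of the input to satisfying the
    # branch predicate; 0 means the branch is covered.
    return abs(x - 500)

def genetic_search(pop_size=30, generations=200, rng=None):
    """Evolve integer inputs toward covering the target branch."""
    rng = rng or random.Random(1)
    pop = [rng.randint(0, 10_000) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=branch_distance)
        if branch_distance(pop[0]) == 0:      # branch covered, stop early
            break
        survivors = pop[: pop_size // 2]      # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = (a + b) // 2              # arithmetic crossover
            child += rng.randint(-10, 10)     # small mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=branch_distance)

best = genetic_search()
```

In a real framework the fitness would be computed by instrumenting the control flow graph rather than by a hand-written distance function.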
Download

Paper Nr: 198
Title:

SENTENCE SIMILARITY MEASURES TO SUPPORT WORKFLOW EXCEPTION HANDLING

Authors:

Aibrahim Aldeeb, Martin Stanton, Keeley Crockett and David Pearce

Abstract: The occurrence of exceptions in workflow systems is common. Searching past exception handlers' records for similar exceptions serves as a good source when designing the solution to resolve the exception at hand. In the literature, there are three approaches to retrieving similar workflow exception records from the knowledge base: the keyword-based approach, concept hierarchies and pattern matching retrieval systems. However, in a workflow domain, exceptions are often described by workflow participants as a short text in natural language rather than as a set of user-defined keywords. Therefore, the above-mentioned approaches are not effective in retrieving relevant information. The proposed approach considers the semantic similarity between workflow exceptions rather than term-matching schemes, taking account of the semantic information and word order information implied in the sentence. Our findings show that sentence similarity measures are capable of supporting the retrieval of relevant information from workflow exception handling knowledge. This paper presents a novel approach that applies sentence similarity measures within the case-based reasoning methodology for workflow exception handling. A data set comprising 76 sentence pairs representing instance-level workflow exceptions is tested, and the results show significant correlation between the automated similarity measures and the human domain expert's intuition.
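A minimal sketch of a sentence similarity measure that combines semantic overlap with word-order information, in the spirit the abstract describes. The semantic component here is a simple token-overlap stand-in for the lexical-database measures typically used in this literature, the weight `delta` is a conventional choice rather than the paper's, and the example sentences are invented.

```python
import math

def word_order_sim(t1, t2):
    """Word-order component: compare position vectors of the joint word set
    (a simplification of measures from the sentence-similarity literature)."""
    joint = sorted(set(t1) | set(t2))
    r1 = [t1.index(w) + 1 if w in t1 else 0 for w in joint]
    r2 = [t2.index(w) + 1 if w in t2 else 0 for w in joint]
    diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(r1, r2)))
    tot = math.sqrt(sum((a + b) ** 2 for a, b in zip(r1, r2)))
    return 1 - diff / tot if tot else 1.0

def semantic_sim(t1, t2):
    # Stand-in for a WordNet-style semantic measure: token-set Jaccard overlap.
    s1, s2 = set(t1), set(t2)
    return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 1.0

def sentence_similarity(a, b, delta=0.85):
    # Weighted blend of semantic and word-order similarity.
    t1, t2 = a.lower().split(), b.lower().split()
    return delta * semantic_sim(t1, t2) + (1 - delta) * word_order_sim(t1, t2)
```

For case retrieval, a new exception description would be scored against every stored case and the highest-scoring handlers returned.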
Download

Paper Nr: 207
Title:

PERFORMANCE GAIN FOR CLUSTERING WITH GROWING NEURAL GAS USING PARALLELIZATION METHODS

Authors:

Alexander Adam, Sascha Dienelt, Wolfgang Benn and Sebastian Leuoth

Abstract: The amount of data in databases is increasing steadily. Clustering this data is one of the common tasks in Knowledge Discovery in Databases (KDD). For KDD purposes, this means that many algorithms need so much time that they become practically unusable. To counteract this development, we apply parallelization techniques to clustering. Recently, new parallel architectures have become affordable to the common user. We investigated especially the GPU (Graphics Processing Unit) and multi-core CPU architectures. These incorporate a large number of computing units paired with low latencies and high bandwidths between them. In this paper we present the results of different parallelization approaches to the GNG (Growing Neural Gas) clustering algorithm. This algorithm is beneficial as it is an unsupervised learning method and chooses on its own the number of neurons needed to represent the clusters.
Download

Paper Nr: 210
Title:

HOURLY PREDICTION OF ORGAN FAILURE AND OUTCOME IN INTENSIVE CARE BASED ON DATA MINING TECHNIQUES

Authors:

Manuel F. Santos, Álvaro Silva, Fernando Rua, Filipe Portela and Marta V. Boas

Abstract: The use of Data Mining techniques makes it possible to extract knowledge from high volumes of data. Currently, there is a trend to use Data Mining models in intensive care to support physicians' decision processes. Previous results used offline data for predicting organ failure and outcome for the next day. This paper presents the INTCare system and the recently generated Data Mining models. Advances in INTCare led to a new goal: prediction of organ failure and outcome for the next hour, with data collected in real time in the Intensive Care Unit of Hospital Geral de Santo António, Porto, Portugal. This experiment used Artificial Neural Networks, Decision Trees, Logistic Regression and Ensemble Methods, and we have achieved very interesting results, proving that it is possible to use real-time data from the Intensive Care Unit to make highly accurate predictions for the next hour. This is a great advance in terms of intensive care, since predicting organ failure and outcome on an hourly basis will allow intensivists to take a faster and more pro-active attitude in order to avoid or reverse organ failure.
Download

Paper Nr: 216
Title:

USING NATURAL LANGUAGE PROCESSING FOR AUTOMATIC EXTRACTION OF ONTOLOGY INSTANCES

Authors:

Rosario Girardi, Ivo Serra, Djefferson Maranhão, Maria Macedo and Carla Faria

Abstract: Ontologies are used by modern knowledge-based systems to represent and share knowledge about an application domain. Ontology population aims to identify instances of the concepts and relationships of an ontology. Manual population by domain experts and knowledge engineers is an expensive and time-consuming task, so automatic and semi-automatic approaches are needed. This article proposes an initial approach for automatic ontology population from textual sources that uses natural language processing and machine learning techniques. Some experiments using a family law corpus were conducted in order to evaluate it. Initial results are promising and indicate that our approach can extract instances with good effectiveness.
Download

Paper Nr: 234
Title:

A SOFTWARE SYSTEM FOR DATA INTEGRATION AND DECISION SUPPORT FOR EVALUATION OF AIR POLLUTION HEALTH IMPACT

Authors:

Luigi Quarenghi, Federica Bargna, Daniele Toscani, Francesco Archetti and Ilaria Giordani

Abstract: In this paper we present a software system for decision support (DSS – Decision Support System) aimed at forecasting high demand for admissions to health care structures due to environmental pollution. The algorithmic kernel of the system is based on machine learning; the software architecture is such that both persistent and sensor data are integrated through a data integration infrastructure. Given the actual concentrations of different pollutants, measured by a network of sensors, the DSS allows forecasting the demand for hospital admissions for acute diseases in the next 1 to 6 days. We tested our system on cardiovascular and respiratory diseases in the area of Milan.
Download

Paper Nr: 238
Title:

EXPERT KNOWLEDGE MANAGEMENT BASED ON ONTOLOGY IN A DIGITAL LIBRARY

Authors:

Antonio Martin and Carlos León de Mora

Abstract: The architecture of future Digital Libraries should allow any user to access available knowledge resources from anywhere, at any time and in an efficient manner. Moreover, for the individual user there is a great deal of useless information in addition to the substantial amount of useful information. The goal is to investigate how to best combine Artificial Intelligence and Semantic Web technologies for semantic searching across largely distributed and heterogeneous digital libraries. Artificial Intelligence and the Semantic Web have provided both new possibilities and challenges for automatic information processing in the search engine process. The major research tasks involved are to apply an appropriate infrastructure for the construction of a specific digital library system, to enrich metadata records with ontologies and to enable semantic searching upon such an intelligent system infrastructure. We study improving the efficiency of search methods for searching a distributed data space like a Digital Library. This paper outlines the development of a Case-Based Reasoning prototype system based on an ontology for information retrieval in the Digital Library of the University of Seville. The results demonstrate that using the expert system and the ontology in the retrieval process enhances the effectiveness of information retrieval.
Download

Paper Nr: 242
Title:

A THREE LEVEL ABSTRACTION HIERARCHY TO REPRESENT PRODUCT STRUCTURAL INFORMATION

Authors:

Horacio Leone, Marcela Vegetti and Gabriela Henning

Abstract: Product models should integrate and efficiently manage all the information associated with products in the context of industrial enterprises or supply chains (SCs). Nowadays, it is quite common for an organization, and even for each area within a company, to have its own product model. This situation leads to information duplication and its associated problems. In addition, traditional product models do not properly handle the high number of variants managed in today's competitive markets. Therefore, there is a need for an integrated product model to be shared by the organizations participating in global SCs or by all areas within a company. One way to reach an intelligent integration among product models is by means of an ontology. PRONTO (PRoduct ONTOlogy) is an ontology for the Product Modelling domain, able to efficiently handle product variants. This contribution presents a ConceptBase formalization of PRONTO, as well as an extension of it that allows the inference of product structural knowledge and the specification of valid products.
Download

Paper Nr: 272
Title:

Assessment of Change in Number of Neurons in the Hidden Layers of Neural Networks for Fault Identification in Electrical Systems

Authors:

Pedro G. Coelho, Joaquim Pinto Rodrigues, David da Silva and Luiz Biondi Neto

Abstract: This work describes a performance evaluation of ANNs (Artificial Neural Networks) used to identify faults in electrical systems, for several numbers of neurons in the hidden layers. The number of neurons in the hidden layers depends on the complexity of the problem to be represented. Currently, there are no reliable rules for determining, a priori, the number of hidden neurons, so that number depends largely on the experience of the practitioners who are designing the ANNs. This paper reports experimental results using neural networks with varying numbers of hidden neurons, to help the neural network user find an adequate configuration in terms of the number of neurons in the hidden layers, so that ANNs are more efficient, particularly for fault identification applications.
Download

Paper Nr: 307
Title:

Swarm Intelligence for Rule Discovery in Data Mining

Authors:

Andre Britto de Carvalho, Aurora Pozo and Taylor Savegnago

Abstract: This paper discusses Swarm Intelligence approaches for Rule Discovery in Data Mining. The first approach is a new rule learning algorithm based on Particle Swarm Optimization (PSO) that uses a multiobjective technique to conceive a completely novel approach to inducing classifiers, called MOPSO-N. In this approach, the properties of the rules can be expressed as different objectives, and the algorithm then finds these rules in a single run by exploring Pareto dominance concepts. The second approach, called the PSO/ACO2 algorithm, uses a hybrid technique combining Particle Swarm Optimization and Ant Colony Optimization. Both approaches directly deal with continuous and nominal attribute values, a feature that current bio-inspired rule induction algorithms lack. In this work, an experiment is performed to evaluate both approaches by comparing the performance of the induced classifiers.
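Both approaches build on the canonical PSO update, which the sketch below applies to a simple continuous function rather than to rule induction (MOPSO-N and PSO/ACO2 add multiobjective and hybrid machinery on top of this core). The parameter values are conventional defaults, not the authors' settings.

```python
import random

def pso(f, dim=2, n=20, iters=100, rng=None):
    """Canonical particle swarm: each velocity blends inertia, a cognitive
    pull toward the particle's own best, and a social pull toward the
    swarm's best position."""
    rng = rng or random.Random(7)
    w, c1, c2 = 0.7, 1.5, 1.5
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso(sphere)
```

In a rule-discovery setting, each particle would instead encode rule thresholds and the fitness would score rule quality (e.g., sensitivity and specificity as separate objectives).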
Download

Paper Nr: 322
Title:

NARFO* ALGORITHM: OPTIMIZING THE PROCESS OF OBTAINING NON-REDUNDANT AND GENERALIZED SEMANTIC ASSOCIATION RULES

Authors:

Rafael G. Miani, Marilde Terezinha Prado Santos, Cristiane A. Yaguinuma and Vinícius T. Ferraz

Abstract: This paper proposes the NARFO* algorithm, an algorithm for mining non-redundant and generalized association rules based on fuzzy ontologies. The main contribution of this work is to optimize the process of obtaining non-redundant and generalized semantic association rules by introducing the minGen (Minimal Generalization) parameter into the latest version of the NARFO algorithm. This parameter acts on rule generalization, especially for rules with low minimum support, preserving their semantics and eliminating redundancy, thus considerably reducing the number of generated rules. Experiments showed that NARFO* produces semantic rules without redundancy, obtaining reductions of 68.75% and 55.54% in comparison with the XSSDM algorithm and the NARFO algorithm, respectively.
Download

Paper Nr: 363
Title:

Model of Knowledge Spreading for Multi-Agent Systems

Authors:

David Oviedo Olmedo, Francisco Sivianes, Maria C. Romero Ternero, Maria Dolores Hernandez Vazquez, Alejandro Carrasco Muñoz and Jose Ignacio Escudero Fombuena

Abstract: This paper presents a model to spread knowledge in multiagent-based control systems, where simplicity, scalability, flexibility and optimization of the communication system are the main goals. This model not only implies some guidelines on how the communication among different agents in the system is carried out, but also defines the organization of the elements of the system. The proposed model is applied to the control system of a solar power plant, obtaining an architecture which optimizes the agents for the problem. The agents in this system can cooperate and coordinate to achieve a global goal, encapsulate the hardware interfaces and make the control system easily adaptable to different requirements through configuration. The model also includes an algorithm that adds new variables to the communication among agents and enables the control of knowledge flow in the system.
Download

Paper Nr: 375
Title:

TREEAD: A TOOL THAT ENABLES THE RE-USE OF EXPERIENCE IN ENTERPRISE ARCHITECTURE DESCRIPTION

Authors:

Paulo Tomé

Abstract: Enterprise Architecture (EA) is an important organizational issue. The EA, resulting from a development process, is an important tool in different situations. It is used as a communication tool between the system's stakeholders. It is an enabler of change. More importantly, the EA definition allows the building of unifying or coherent forms or structures. There has been growing interest in this topic area in recent years. Several authors have proposed methods, frameworks and languages with the aim of helping organizations in the process of EA definition. This paper proposes a software tool that can be used to re-use experience in EA definition processes. The proposed tool is independent of the framework, method, language or software tool used in the EA description process.
Download

Paper Nr: 402
Title:

Supporting Complexity in Modeling Bayesian Troubleshooting

Authors:

Davide De Pasquale and Luigi Troiano

Abstract: Troubleshooting complex systems, such as industrial plants and machinery, is a task entailing an articulated decision-making process that is hard to structure and generally relies on human experience. Recently, probabilistic reasoning, and Bayesian networks in particular, proved to be an effective means to support and drive decisions in troubleshooting. However, troubleshooting a real system requires facing scalability and feasibility issues, so that the direct employment of Bayesian networks is not feasible. In this paper we report our experience in applying the Bayesian approach to an industrial case, and we propose a methodology to decompose a complex problem into more tractable parts.
Download

Paper Nr: 404
Title:

New Approaches to Enterprise Cooperation Generation and Management

Authors:

Joerg Laessig and Trommler Ullrich

Abstract: The paper considers the problem of task-specific cooperation generation and management in a network of small or medium-sized enterprises. Such short-term cooperations are called virtual enterprises. So far this problem has been discussed by several authors applying different methods from artificial intelligence, such as multi-agent systems, ant colony optimization or genetic algorithms, and combinations of them. In this paper we discuss this problem from a target-oriented point of view and focus on the question of how it can be modeled to keep its complexity controllable, by considering sequential, parallel and non-combinatorial approaches. After describing the implementation of a cooperation generation solution as a rich internet application, we also describe solutions for the management of such cooperations, considering aspects such as replanning.
Download

Paper Nr: 411
Title:

Revisiting generalized evolutionary programming with Levy-type mutations

Authors:

Jyhjeng Deng

Abstract: This paper describes the relationship between a stable process, the Levy distribution, and the Tsallis distribution. These two distributions are often confused as different versions of each other, and are commonly used as mutators in evolutionary algorithms. This study shows that they are usually different, but are identical in the special cases of the normal and Cauchy distributions. These two distributions can also be related to each other. With proper equations for two different settings (with Levy's kurtosis parameter alpha < 0.3490, and otherwise), the two distributions match well, particularly for 1 <= alpha <= 2. This study also includes an exact random number generator for the Tsallis distribution to evaluate its performance in evolutionary programming. Sphere model analysis shows a strikingly different result after adjusting the two main settings in running evolutionary programming with the Tsallis distribution. The first adjustment is to use a common scale parameter involving the annealing temperature and the degree of non-extensivity; the second adjustment is to use an exact variate generator for the Tsallis distribution. Our experiment shows that on the Sphere model the Tsallis variate with alpha <= 2.5 outperforms classical evolutionary programming (CEP) with a normal variate as a mutator. This finding is contrary to the common myth that for single global optimization problems the CEP approach is usually better because it is good at small perturbations. The findings of this study shed some light on this matter based on the quantile function of the Tsallis distribution.
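For context on the mutators compared above, the sketch below runs a heavily simplified (1+1) hill-climbing variant of evolutionary programming on the Sphere model, contrasting a normal mutator with a heavy-tailed Cauchy mutator generated by inverse-CDF sampling. This is illustrative only: the paper's experiments use full EP populations and an exact Tsallis variate generator, and the scale 0.1 is an arbitrary choice.

```python
import math
import random

def sphere(x):
    # The Sphere model: sum of squared coordinates, minimum 0 at the origin.
    return sum(v * v for v in x)

def ep_run(mutate, dim=10, iters=2000, rng=None):
    """(1+1) greedy run: mutate every coordinate and keep the child
    only if it improves the Sphere objective."""
    rng = rng or random.Random(3)
    x = [rng.uniform(-10, 10) for _ in range(dim)]
    for _ in range(iters):
        child = [v + mutate(rng) for v in x]
        if sphere(child) < sphere(x):
            x = child
    return sphere(x)

gauss = lambda rng: rng.gauss(0, 0.1)
# Cauchy variate via the inverse CDF; much heavier tails than the normal.
cauchy = lambda rng: 0.1 * math.tan(math.pi * (rng.random() - 0.5))

f_gauss = ep_run(gauss)
f_cauchy = ep_run(cauchy)
```

The heavy tails of the Cauchy (and, in the paper, Tsallis) mutator occasionally propose large jumps that help escape flat or deceptive regions, at the cost of more rejected steps near the optimum.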
Download

Paper Nr: 423
Title:

Supervised Learning for Agent Positioning by Using Self-Organizing Map

Authors:

Kazuma Moriyasu, Takeshi Yoshikawa and Hidetoshi Nonaka

Abstract: We propose a multi-agent cooperative method that helps each agent cope with partial observation and reduces the number of teaching data. It learns cooperative actions between agents by using the Self-Organizing Map for supervised learning. The input vectors of the Self-Organizing Map are data that reflect the operator's intention. We show that our proposed method can acquire cooperative actions between agents and reduce the number of teaching data through two evaluation experiments using the pursuit problem, one of the classic multi-agent system problems.
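The standard Self-Organizing Map update the method builds on can be sketched as follows. The sketch uses toy 2-D data rather than the operator-intention vectors of the paper, and the grid size, rates and data are arbitrary choices for illustration.

```python
import math
import random

def train_som(data, grid=5, iters=500, rng=None):
    """Minimal 1-D Self-Organizing Map: find the best-matching unit (BMU)
    for each sample and pull the BMU and its neighbours toward the sample,
    with learning rate and neighbourhood radius decaying over time."""
    rng = rng or random.Random(0)
    dim = len(data[0])
    w = [[rng.random() for _ in range(dim)] for _ in range(grid)]
    for t in range(iters):
        lr = 0.5 * (1 - t / iters)                      # decaying learning rate
        radius = max(1.0, grid / 2 * (1 - t / iters))   # shrinking neighbourhood
        x = data[t % len(data)]
        bmu = min(range(grid),
                  key=lambda i: sum((w[i][d] - x[d]) ** 2 for d in range(dim)))
        for i in range(grid):
            h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))  # neighbourhood kernel
            for d in range(dim):
                w[i][d] += lr * h * (x[d] - w[i][d])
    return w

# Two well-separated 2-D clusters; different units should specialize on each.
data = [(0.1, 0.1), (0.12, 0.08), (0.9, 0.9), (0.88, 0.92)]
weights = train_som(data)
```

In the supervised use described by the abstract, the input vectors would additionally encode the operator's intended action, so that the winning unit maps an observed situation to a taught cooperative action.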
Download

Paper Nr: 433
Title:

REPLANTING THE ANSWER GARDEN: CULTIVATING EXPERTISE THROUGH DECISION SUPPORT TECHNOLOGY

Authors:

Stephen Diasio and Núria Agell

Abstract: A growing body of literature within the information technology field has focused on augmenting organizational knowledge and expertise. Due to increasing environmental complexity and changing technology, the exogenous assumptions found within this literature must be readdressed. Expert systems, group decision support systems and collective intelligence tools are presented to illustrate how the expertise needed in organizational decision-making is changing and may not reside within traditional organizational boundaries. This paper suggests future research streams on how expertise can be cultivated through decision support technologies, and how organizational expertise and problem-solving can be augmented, reflecting the changing roles of experts and non-experts.
Download

Paper Nr: 49
Title:

A Perception Mechanism for Two-dimensional Shapes in the Virtual World

Authors:

Jae W. Park and Jong-Hee Park

Abstract: A lifelike agent in the virtual world is an agent designed to simulate realistic human behavior. Agents continuously repeat a process that includes perception, recognition, decision and behavior in the virtual world. Through these processes, the agents store new information in their memory or modify their knowledge if needed. This study mainly deals with perception, the intermediate step between image processing and recognition. It shows how the agents perceive shapes, and how it is possible to infer the part of a shape that is partially hidden from the agent's vision.
Download

Paper Nr: 108
Title:

TOWARD A MODEL OF CUSTOMER EXPERIENCE

Authors:

Michael Anaman and Mark Lycett

Abstract: Retaining profitable and high-value customers is a major strategic objective for many companies. In mature mobile markets where growth has slowed, the defection of customers from one network to another has intensified and is strongly fuelled by poor customer experience. In this light, this research-in-progress paper describes a strategic approach to the use of Information Technology as a means of improving customer experience. Using action research in a mobile telecommunications operator, a model is developed that evaluates disparate customer data, residing across many systems, and suggests appropriate contextual actions where experience is poor. The model provides value in identifying issues, understanding them in the context of the overall customer experience (over time) and dealing with them appropriately. The novelty of the approach is the synthesis of data analysis with an enhanced understanding of customer experience which is developed implicitly and in real-time.
Download

Paper Nr: 137
Title:

EVOLVING STRUCTURES FOR PREDICTIVE DECISION MAKING IN NEGOTIATIONS

Authors:

Marisa Masvoula, Panagiotis Kanellis and Drakoulis Martakos

Abstract: Predictive decision making increases the individual or joint gain of negotiators and has been extensively studied. One particular skill of predicting agents is forecasting their opponents' future offers. Current systems focus on enhancing the learning techniques in the decision-making module of negotiating agents, with the purpose of developing more robust systems. Empirical studies are conducted in bounded problem spaces, where the data distribution is known or assumed. Our proposal concentrates on the incorporation of learning structures into agents' decision making that are capable of forecasting opponents' future offers even in open problem spaces, which is the case in most negotiation situations.
Download

Paper Nr: 139
Title:

Automatic Behaviour-based Analysis and Classification System for Malware Detection

Authors:

Xabier Cantero and Jaime Devesa

Abstract: Malware is any kind of program explicitly designed to harm, such as viruses, trojan horses or worms. Since the amount of malware is growing exponentially, it already poses a serious security threat. Therefore, every incoming code sample must be analysed in order to classify it as malware or benign software. These tests commonly combine static and dynamic analysis techniques in order to extract the greatest amount of information from distrustful files. Moreover, the increase in the number of attacks makes it infeasible to manually test the thousands of suspicious archives that reach antivirus laboratories every day. Against this background, we present here an automated system for malware behaviour analysis based on emulation and simulation techniques. Creating a secure and reliable sandbox environment allows us to test the retrieved suspicious code without risk. In this way, we can also generate evidence and classify the samples with several machine-learning algorithms. We have developed the proposed solution and tested it with real malware. Finally, we have evaluated it in terms of reliability and time performance, two of the main aspects required for such a system to work.
Download

Paper Nr: 187
Title:

Creating and decomposing functions using fuzzy functions

Authors:

József Dombi and József D. Dombi

Abstract: In this paper we present a new approach for composing and decomposing functions. This technology is based on the Pliant concept, which is a subclass of the fuzzy concept. We create an effect by using the conjunction of sigmoid functions. Afterwards, we make a proper transformation of the effect in order to define the neutral value. Then, by repeatedly applying this method, we can create further effects. By aggregating the effects, we can compose the desired function. This tool is also capable of function decomposition, and can be used to solve a variety of real-time problems. Two advantages of our solution are that the parameters of our algorithm have a semantic meaning, and that it is also possible to use the parameter values in any kind of learning procedure.
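The composition idea can be illustrated with a small sketch: a localized "effect" is built as the conjunction (modelled here simply as a product) of an increasing and a decreasing sigmoid, and a target function is approximated by aggregating weighted effects. The steepness value and the example effects are arbitrary illustrations, not the Pliant operators of the paper.

```python
import math

def sigmoid(x, a, lam):
    # Sigmoid with inflection point a and steepness lam; the parameters
    # carry semantic meaning (where and how sharply the effect switches on).
    return 1.0 / (1.0 + math.exp(-lam * (x - a)))

def effect(x, a, b, lam=20.0):
    """A localized effect on (a, b), built as the conjunction (here: product)
    of an increasing sigmoid at a and a decreasing sigmoid at b."""
    return sigmoid(x, a, lam) * (1.0 - sigmoid(x, b, lam))

def compose(x, effects):
    # Aggregate weighted effects to approximate a target function.
    return sum(wgt * effect(x, a, b) for (a, b, wgt) in effects)

# Hypothetical composition: a step-like function built from two effects.
effects = [(0.0, 1.0, 0.5), (1.0, 2.0, 1.5)]
y_mid1 = compose(0.5, effects)   # inside the first effect
y_mid2 = compose(1.5, effects)   # inside the second effect
```

Decomposition would run in the other direction: fit the locations, steepnesses and weights of the effects to a given function, which is exactly where the learnable, semantically meaningful parameters pay off.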
Download

Paper Nr: 219
Title:

An Extensible Ensemble Environment for Time Series Forecasting

Authors:

Ricardo Choren, Ronaldo R. Goldschmidt and Claudio Vasconcelos

Abstract: Diverse works have demonstrated that ensembles can improve performance over any individual solution for time series forecasting. This work presents an extensible environment that can be used to create, experiment with and analyse ensembles for time series forecasting. Usually, the analyst develops the individual solutions and the ensemble algorithms for each experiment. The proposed environment intends to provide a flexible tool for the analyst to include, configure and experiment with individual solutions and to build and execute ensembles. In this paper, we describe the environment and its features, and we present a simple experiment on its usage.
Download

Paper Nr: 310
Title:

Hybrid approach for incoherence detection based on neuro-fuzzy systems and expert knowledge

Authors:

Susana Martin-Toral, Yannis Dimitriadis and Gregorio I. Sainz-Palmero

Abstract: The way in which document collections are generated, modified or updated produces problems and mistakes in information coherence, leading to legal, economic and social problems. To tackle this situation, this paper proposes the development of an intelligent virtual domain expert, based on summarization, matching and neuro-fuzzy systems, able to detect incoherences concerning concepts, values or references in technical documentation. In this scope, an incoherence is seen as the lack of consistency between related documents. Each document is summarized in the form of 4-tuple terms describing relevant ideas or concepts that must be free of incoherences. These representations are then matched using several well-known algorithms. The final decision about the real existence of an incoherence, and its relevance, is obtained by training a neuro-fuzzy system with expert knowledge, based on previous knowledge of the activity area and domain experts. The final system offers a semi-automatic solution for incoherence detection and decision support.
Download

Paper Nr: 311
Title:

MINING FARMERS' PROBLEMS IN A WEB-BASED TEXTUAL DATABASE APPLICATION

Authors:

Said Mabrouk, Samhaa R. El-Beltagy, Ahmed Rafea and Mahmoud Rafea

Abstract: VERCON (Virtual Extension and Research Communication Network) is a web-based agriculture application, developed to improve communication between agricultural research institutions and extension workers for the benefit of farmers and agrarian businesses. The farmers' problems component is one of VERCON's main components. It is used to receive farmers' problems and provide them with solutions. Over the last five years, problems and their solutions have accumulated in a textual database. This paper presents an integrated approach for mining these problems and their solutions. The opportunity and potential of mining and extracting information from this resource were identified with several objectives in mind, such as: a) discovering patterns and relations that can be used to enhance the utilization of this valuable resource; b) analyzing solutions given for similar problems, by different experts or by the same expert at different times, in terms of their similarities and differences; and c) creating patterns of problems and their solutions that can be used to classify new problems and provide solutions without the need for a domain expert.
Download

Paper Nr: 323
Title:

Konsultant: A knowledge base for automated interpretation of profit values

Authors:

Bojan Tomic and Tanja Milić

Abstract: Modern reporting systems and business intelligence tools provide various reports for everyday (business) use. Unfortunately, it seems that these reports contain mostly data and little or no information. The consequence is that the users need to manually analyze and interpret large quantities of data in order to get information on how the business is doing. A potential solution would be to automate this interpretation process by using a knowledge base that transforms data into information. In this paper we present Konsultant - a knowledge base for automated interpretation of annual profit values for enterprises.
Download

Paper Nr: 353
Title:

MODEL- AND SIMULATION DRIVEN SYSTEMS-ENGINEERING FOR CRITICAL INFRASTRUCTURE SECURITY ARCHITECTURES

Authors:

Philipp Rech and Sascha A. Goldner

Abstract: The design of systems (defined as a combination of organisations, processes, platforms, hardware and software) that support the protection of critical infrastructures or state borders against various threats is a complex challenge. It requires the use of both modelling and simulation in order to reach the operational goal. Threat scenarios like terrorist attacks, (organized) crime or natural disasters involve many different organisations, authorities, and technologies. These systems are often operated only by implicitly known processes and diverse operational guidelines based on dissimilar legislations and overlapping responsibilities. In order to cope with these complex infrastructure systems and their interconnected processes, a scenario- and architecture-based systems engineering approach must be implemented to design a solution architecture compliant with the requirements and with internal and external demands. This paper gives an overview of the developed approach towards the system architecture and explains the different engineering steps taken to implement this architecture for real use-cases.
Download

Paper Nr: 368
Title:

A Hybridized Genetic Algorithm for Cost Estimation in Bridge Maintenance Systems

Authors:

Mohammed Iqbal, Abdunnaser Younes, Richard Lourenco, Nathan Good and Khaled Shaban

Abstract: A hybridized genetic algorithm is proposed to determine a repair schedule for a network of bridges. The schedule aims for the lowest overall cost while maintaining each bridge at satisfactory quality conditions. Appreciation, deterioration, and cost models are employed to model real-life behaviour. To reduce the computational time, pre-processing algorithms are used to determine an initial genome that is closer to the optimal solution than a randomly generated genome. A post-processing algorithm that locates a local optimal solution from the output of the genetic algorithm is employed to further reduce computational costs. Experimental work was carried out to demonstrate the effectiveness of the proposed approach in determining the bridge repair schedule. The addition of a pre-processing algorithm improves the results if the simulation period is constrained. If the simulation is run sufficiently long, all pre-processing algorithms converge to the same optimal solution. If a pre-processing algorithm is not implemented, however, the simulation period increases significantly. The cost and deterioration tests also indicate that certain pre-processing algorithms are better suited for larger bridge networks. The local search performed on the genetic algorithm output always proves a positive add-on that further improves results.
Download
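The hybridization described in this abstract, seeding the GA with a heuristically constructed genome and polishing the GA's output with local search, can be illustrated with a minimal sketch. The bit-string encoding and the cost function below are invented placeholders, not the authors' bridge appreciation, deterioration, or cost models:

```python
import random

def cost(genome):
    # Hypothetical repair-schedule cost: a repair in period i costs (i + 1),
    # and at least 3 repairs must be scheduled (heavy penalty otherwise).
    lateness = sum(bit * (i + 1) for i, bit in enumerate(genome))
    return lateness + 1000 * max(0, 3 - sum(genome))

def heuristic_seed(n):
    # "Pre-processing": a greedy genome that schedules early repairs only.
    return [1 if i < n // 2 else 0 for i in range(n)]

def local_search(genome):
    # "Post-processing": flip single bits while the cost keeps improving.
    improved = True
    while improved:
        improved = False
        for i in range(len(genome)):
            neighbour = genome.copy()
            neighbour[i] ^= 1
            if cost(neighbour) < cost(genome):
                genome, improved = neighbour, True
    return genome

def hybrid_ga(n=10, pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    # Seed the population with the heuristic genome instead of all-random ones.
    population = [heuristic_seed(n)] + [
        [rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size - 1)
    ]
    for _ in range(generations):
        population.sort(key=cost)
        parents = population[: pop_size // 2]        # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)                # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                   # bit-flip mutation
                child[rng.randrange(n)] ^= 1
            children.append(child)
        population = parents + children
    return local_search(min(population, key=cost))

best = hybrid_ga()
```

The seeded start places the search near a feasible schedule from the first generation, and the bit-flip local search plays the role of the paper's post-processing step.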

Paper Nr: 386
Title:

AUTOMATIC SUMMARIZATION OF ARABIC TEXTS BASED ON RST TECHNIQUE

Authors:

Iskandar Keskes, Lamia H. Belguith, Philippe Blache and Mohamed H. Maaloul

Abstract: We present in this paper an automatic summarization technique for Arabic texts, based on RST. We first present a corpus study which enabled us to specify, following empirical observations, a set of relations and rhetorical frames. Then, we present our method to automatically summarize Arabic texts. Finally, we present the architecture of the ARSTResume system. This method is based on Rhetorical Structure Theory (Mann, 1988) and uses linguistic knowledge. The method relies on three pillars. The first consists in locating the rhetorical relations between the minimal units of the text by applying rhetorical rules. One of these units is the nucleus (the segment necessary to maintain coherence) and the other can be either a nucleus or a satellite (an optional segment). The second pillar is the representation and simplification of the RST tree that represents the input text in hierarchical form. The third pillar is the selection of sentences for the final summary, which takes into account the type of the rhetorical relations chosen for the extract.
Download

Paper Nr: 387
Title:

A Model for Representing Vague Linguistic Terms and Fuzzy Rules for Classification in Ontologies

Authors:

Cristiane A. Yaguinuma, Tatiane Nogueira, Vinícius T. Ferraz, Marilde Terezinha Prado Santos and Heloisa Camargo

Abstract: Ontologies have been successfully employed in applications that require semantic information processing. However, traditional ontologies are not able to express fuzzy or vague information, which often occurs in human vocabulary as well as in several application domains. In order to deal with such a restriction, concepts of fuzzy set theory should be incorporated into ontologies so that it is possible to represent and reason over fuzzy or vague knowledge. In this context, this paper proposes a model for representing fuzzy ontologies covering fuzzy properties and fuzzy rules, and implements fuzzy reasoning methods, such as classical and general fuzzy reasoning, aiming to support the classification of new instances based on fuzzy rules.
Download
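The kind of rule-based fuzzy classification this abstract refers to can be illustrated with a small sketch using triangular membership functions and max-strength inference. The linguistic terms, membership functions and rule base below are hypothetical examples, not taken from the proposed ontology model:

```python
def triangular(a, b, c):
    # Triangular membership function: 0 outside [a, c], peaking at b.
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Vague linguistic terms for a hypothetical numeric feature "temperature".
low = triangular(-10.0, 0.0, 15.0)
mild = triangular(5.0, 15.0, 25.0)
high = triangular(20.0, 30.0, 45.0)

# Fuzzy rule base: IF temperature IS <term> THEN class IS <label>.
rules = [(low, "cold_day"), (mild, "pleasant_day"), (high, "hot_day")]

def classify(x):
    # Fire every rule and keep the conclusion with the highest firing strength.
    strengths = {label: term(x) for term, label in rules}
    return max(strengths, key=strengths.get), strengths

label, strengths = classify(12.0)
```

Note that an input such as 12.0 belongs to both "low" and "mild" with nonzero degrees; the classifier reports the strongest conclusion while retaining all firing strengths for inspection.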

Paper Nr: 432
Title:

DEALING WITH IMBALANCED PROBLEMS: ISSUES AND BEST PRACTICES

Authors:

Rodica Potolea and Camelia Lemnaru

Abstract: An imbalanced problem is one in which, in the available data, one class is represented by a smaller number of instances compared to the other classes. The drawbacks induced by the imbalance are analyzed and possible solutions for overcoming these issues are presented. In dealing with imbalanced problems, one should consider a wider context, taking into account the imbalance rate, together with other data-related particularities and the classification algorithms with their associated parameters.
Download
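As a concrete instance of one common remedy for the imbalance the abstract describes, random oversampling duplicates minority-class instances until the class counts match. The toy data below is invented for illustration and is not from the paper:

```python
import random
from collections import Counter

def random_oversample(samples, labels, seed=0):
    # Duplicate minority-class instances until every class reaches the
    # majority-class count; a deliberately simple rebalancing baseline.
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_x, out_y = list(samples), list(labels)
    for cls, n in counts.items():
        pool = [x for x, y in zip(samples, labels) if y == cls]
        for _ in range(target - n):
            out_x.append(rng.choice(pool))
            out_y.append(cls)
    return out_x, out_y

# Invented toy data with a 3:1 imbalance rate.
X = [[0], [1], [2], [3], [4], [5], [10], [11]]
y = ["neg"] * 6 + ["pos"] * 2
X_bal, y_bal = random_oversample(X, y)
```

As the abstract argues, resampling alone is rarely the whole answer: the imbalance rate, other data particularities, and the classifier's parameters all influence which remedy works best.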

Paper Nr: 448
Title:

USING CREDIT AND DEBIT CARD PURCHASE TRANSACTION DATA FOR RETAIL SALES STATISTICS - Using Point Of Sale Data to measure consumer spending: The case of Moneris Solutions

Authors:

Lorant Szabo

Abstract: This paper presents how Moneris Solutions stores the credit and debit card purchase transactions that it processes for its merchants, and the methodology that was devised to process this data to produce consumer spending statistics. The transactions are extracted from the production systems at the end of each day and loaded into a warehouse. Aggregations are executed with pre-established frequency, or on an ad-hoc basis, for various merchant samples to calculate sales growth rates in multiple segments. The results are then matched against known events (e.g. Olympic Games) that may have impacted consumer spending during the analysed timeframe. Alternatively, the results may be presented to measure spending growth between any given two time periods in a specific geographical location or industry.
Download

Area 3 - Information Systems Analysis and Specification

Full Papers
Paper Nr: 11
Title:

PROCESS MINING FOR JOB NETS IN INTEGRATED COMPLEX COMPUTER SYSTEMS

Authors:

Shinji Kikuchi, Yasuhide Matsumoto, Motomitsu Adachi and Shingo Moritomo

Abstract: Batch jobs, such as shell scripts, programs and command lines, are used to process large amounts of data in large scale enterprise systems, such as supply chain management (SCM) systems. These batch jobs are connected and cascaded via certain signals or files so as to process various kinds of data in the proper order. Such connected batch jobs are called “job nets”. In many cases, it is difficult to understand the execution order of batch jobs in a job net because of the complexity of their relationships or because of lack of information. However, without understanding the behavior of batch jobs, we cannot achieve reliable system management. In this paper, we propose a method to derive a job net model representing the execution order of the job net from its logs (execution results) by using a process mining technique. Improving on the Heuristic Miner algorithm, we developed an analysis method which takes into account the concurrency of batch job executions in large scale systems. We evaluated our analysis method by a conformance check method using actual job net logs obtained from a large scale SCM system. The results show that our approach can accurately and appropriately estimate the execution order of jobs in a job net.
Download
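The starting point of log-based discovery of this kind is counting direct-follows relations over execution traces. The sketch below shows that step together with a Heuristic-Miner-style dependency measure; the job names and log are invented, and this is not the authors' concurrency-aware extension:

```python
from collections import Counter

def directly_follows(log):
    # One trace = the ordered list of jobs observed in one job-net execution.
    df = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            df[(a, b)] += 1
    return df

def dependency(df, a, b):
    # Heuristic-Miner-style dependency measure in (-1, 1): values close to 1
    # suggest that a is a genuine predecessor of b rather than log noise.
    ab, ba = df[(a, b)], df[(b, a)]
    return (ab - ba) / (ab + ba + 1)

# Invented execution logs of a small job net: extract -> (transform) -> load.
log = [
    ["extract", "transform", "load"],
    ["extract", "transform", "load"],
    ["extract", "load"],
]
df = directly_follows(log)
score = dependency(df, "extract", "transform")
```

The noise-tolerant dependency measure is what distinguishes heuristic approaches from naive direct-follows graphs; handling concurrent batch jobs, as the paper does, requires going beyond this basic measure.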

Paper Nr: 69
Title:

Goal, Soft-goal and Quality Requirement

Authors:

Thi-Thuy-Hang Hoang and Manuel Kolp

Abstract: Requirements are the input to the process of building software. Depending on the development methodology, they are usually classified into several subclasses. Traditional approaches distinguish between functional and non-functional requirements, while modern goal-based approaches use hard-goals and soft-goals to describe requirements. While non-functional requirements are also known as quality requirements, neither hard-goals nor soft-goals are equivalent to quality requirements. Due to their abstractness, quality requirements are usually described as soft-goals, but soft-goals are not necessarily quality requirements. In this paper, we propose a way to resolve the problematic ambiguity between soft-goals and quality requirements in a goal-based context. We reposition the notion of quality requirement in relation to hard-goals and soft-goals. This allows us to decompose a soft-goal into a set of hard-goals (required functions) and quality requirements (required qualities of functions). The immediate applications of this analysis are quality-aware development methodologies for multi-agent systems, of which QTropos is an example.
Download

Paper Nr: 79
Title:

A METHOD FOR PORTFOLIO MANAGEMENT AND PRIORITIZATION: AN INCREMENTAL FUNDING METHOD APPROACH

Authors:

Antonio Juarez Alencar, Gustavo Taveira and Eber Schmitz

Abstract: In today’s very competitive business environment, making the best possible use of limited resources is crucial to achieve success and gain competitive advantage. To accomplish such a goal organizations have to maximize the return provided by their portfolio of future investments, choosing very carefully the IT projects they undertake and the risks they are willing to accept, otherwise they are bound to waste time and money, and still be likely to fail. This article introduces a method that enables managers to better evaluate the investment to be made in a portfolio of IT projects. The method favors the identification of common parts, avoiding the duplication of work efforts, and the selection of the implementation order that yields the highest payoff considering a given risk exposure policy. Moreover, it extends Denne and Cleland-Huang’s ideas on minimum marketable feature modules and uses both Decision Theory and the Principles of Choice to guide the decisions made under uncertainty.
Download

Paper Nr: 84
Title:

A Decision Framework for Selecting a Suitable Software Development Process

Authors:

Michel dos Santos Soares, Joseph Barjis, Jan van den Berg, Jos Vrancken and Itamar Sharon

Abstract: To streamline the activities of software development, a number of software development processes have been proposed in the past few decades. Despite the relative maturity of the field, large companies involved in developing software are still struggling with selecting suitable software processes. This article takes up the challenge of developing a framework that supports decision makers in choosing an appropriate software development process for each individual project. After introducing the problem, the software development processes included in this research are presented. To be able to align software development processes and software projects, a number of project characteristics are then identified. Based on these two analyses, a decision framework is proposed that, given the project characteristics, determines the most appropriate software development process. In a first attempt to validate the framework, it has been applied to two case studies where the outcomes of the decision framework are compared to those found by means of a collection of experts' opinions. It was found that the framework and the experts yield similar outcomes.
Download

Paper Nr: 92
Title:

IDENTIFYING RUPTURES IN BUSINESS-IT COMMUNICATION THROUGH BUSINESS MODELS

Authors:

Juliana J. Ferreira and Fernanda Baião

Abstract: In scenarios where Information Technology (IT) becomes a critical factor for business success, Business-IT communication problems raise difficulties for reaching strategic business goals. Business models are considered an instrument through which this communication may be held. This work argues that business model communicability (i.e., the capability of a business model to facilitate Business-IT communication) influences how the Business and IT areas understand each other and how IT teams identify and negotiate appropriate solutions for business demands. Based on semiotic theory, this article proposes business model communicability as an important aspect to be evaluated for making the Business-IT communication cycle possible, and describes an exploratory study to identify communication ruptures in the evaluation of business model communicability.
Download

Paper Nr: 159
Title:

Managing Data Dependency Constraints Through Business Processes

Authors:

Joe Lin and Shazia Sadiq

Abstract: Business Process Management (BPM) and related tools and systems have generated tremendous advantages for enterprise systems, as they provide a clear separation between process, application and data logic. In spite of the abstraction value that BPM provides through explicit articulation of process models, a seamless flow between the data, application and process layers has not been fully realized in mainstream enterprise software, often leaving process models disconnected from the underlying business semantics captured through data and application logic. The result of this disconnect is disparity (and even conflict) in enforcing various rules and constraints in the different layers. In this paper, we propose to synergise the process and data layers through the introduction of data dependency constraints that can be modelled at the process level and enforced at the data level through a (semi-)automated translation into DBMS native procedures. This simultaneous and consistent specification ensures that disparity between the process and data logic can be minimized.
Download

Paper Nr: 162
Title:

A CONCEPTUAL FRAMEWORK FOR THE DEVELOPMENT OF APPLICATIONS CENTRED ON CONTEXT AND EVIDENCE-BASED PRACTICE

Authors:

Expedito Lopes, Ulrich Schiel, Vaninha Vieira and Ana C. Salgado

Abstract: A conceptual framework aims to provide a class diagram that can be used as the basis for the modelling of an application domain. Its use considerably improves the productivity of the data modelling phase and hence of the development of applications, since it preserves portability and usability across domains. Evidence-Based Practice (EBP), usually employed in Medicine, represents a decision-making process centered on justifications of relevant information. EBP is used in several areas; however, we did not find conceptual models involving EBP that preserve portability and usability across domains. Besides, the decision-making context can have an impact on evidence-based decision making, but the integration of evidence and context is still an open issue. This work presents a conceptual framework that integrates evidence with context, applying it to the conceptual modelling phase for EBP domains. The use of context allows more useful information to be filtered in. The main contributions are the incorporation of contextual information into EBP procedures and the presentation of the proposed conceptual framework. An implementation that uses the filtering of contextual information to support evidence-based decision making in the area of crime prevention is also presented to validate the framework.
Download

Paper Nr: 203
Title:

A NEW TECHNIQUE FOR IDENTIFICATION OF RELEVANT WEB PAGES IN INFORMATIONAL QUERIES RESULTS

Authors:

Paolo Napoletano, Luca Greco and Fabio Clarizia

Abstract: In this paper we present a new technique for retrieving relevant web pages in informational query results. The proposed technique, based on a probabilistic model of language, is embedded in a traditional web search engine. The relevance of a Web page was obtained through the judgment of human assessors who, on a continuous scale, assigned a degree of importance to each of the analyzed websites. In order to validate the proposed method, a comparison with a classic engine is presented, based on Precision and Recall and on a measure of distance with respect to the significance scores obtained from humans.
Download
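The Precision and Recall measures used in the evaluation are standard; as a reminder, for a single query they can be computed as follows (the page identifiers are hypothetical):

```python
def precision_recall(retrieved, relevant):
    # Precision: fraction of retrieved pages that are relevant.
    # Recall: fraction of relevant pages that were retrieved.
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical query: the engine returns 4 pages; humans judged 5 as relevant.
p, r = precision_recall(["p1", "p2", "p3", "p4"], ["p1", "p3", "p4", "p7", "p9"])
```

Comparing two engines on the same human judgments then reduces to comparing these two numbers (or aggregates of them) across a query set.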

Paper Nr: 222
Title:

Towards Extending IMS LD with Services and Context Awareness: Application to a Navigation and Fishing Simulator

Authors:

Maha Khemaja, Slimane Hammoudi and Valérie Monfort

Abstract: Few e-Learning platforms propose a solution for ubiquity and context-aware adaptability. Current standards, such as IMS Learning Design (LD), require an extension to support context awareness. Based on previous related works, we define a fully interoperable and learner (ambient) context-adaptable platform, using a meta-modeling-based approach mixing MDD, parameterized transformations, and model composition. The scope of this paper is to extend the LD meta model as a first step. We use a concrete industrial software engineering product that was promoted by the French Government.
Download

Paper Nr: 273
Title:

A MODEL-DRIVEN APPROACH TO MANAGING AND CUSTOMIZING SOFTWARE PROCESS VARIABILITIES

Authors:

Fellipe A. Aleixo, Marília Aranha Freire, Uirá Kulesza and Wanderson D. Santos

Abstract: This paper presents a model-driven approach to manage and customize software process variabilities. It promotes increased productivity through: (i) process reuse; and (ii) the integration and automation of the definition, customization, deployment and execution activities of software processes. Our approach is founded on the principles and techniques of software product lines and model-driven engineering. In order to evaluate the feasibility of our approach, we have designed and implemented it using existing and available technologies.
Download

Paper Nr: 332
Title:

Enterprise Architecture: State of the Art and Challenges

Authors:

Jorge Duarte and Mamede Lima-marques

Abstract: This paper analyzes current approaches to Enterprise Architecture (EA). Current EA objectives, concepts, frameworks, models, languages, and tools are discussed. The main initiatives and existing works are presented. Strengths and weaknesses of current approaches and tools are discussed, particularly their complexity, cost and low utilization. An EA theoretical framework is provided using a knowledge management approach. Future trends and some research issues are discussed. A research agenda is proposed in order to reduce EA complexity and make it accessible to organizations of any size.
Download

Paper Nr: 366
Title:

COMPOSITIONAL VERIFICATION OF BUSINESS PROCESSES MODELLED WITH BPMN

Authors:

Luis E. Mendoza, María A. Perez and Manuel I. Capel-Tuñón

Abstract: A specific check that must be performed as part of Business Process Modelling (BPM) is whether the activities and tasks described by Business Processes (BPs) are sound and well-coordinated. In this work we present how the model-checking verification technique for software can be integrated within a Formal Compositional Verification Approach (FCVA) to allow the automatic verification of BPs modelled with the Business Process Modelling Notation (BPMN). The FCVA is based on a formal specification language with composition constructs. A timed semantics of BPMN, defined in terms of Communicating Sequential Processes + Time (CSP+T), extends untimed BPMN modelling entities with timing constraints in order to detail the behaviour of BPs during the execution of the real scenarios that they represent. With our proposal we are able to specify and develop the Business Process Task Model (BPTM) of a target business system. To show a practical use of our proposal, a BPTM of an instance of a BPM enterprise project related to the Customer Relationship Management (CRM) business is presented.
Download

Paper Nr: 376
Title:

STOOG: STYLE-SHEETS-BASED TOOLKIT FOR GRAPH VISUALIZATION

Authors:

Guillaume Artignan and Mountaz Hascoët

Abstract: The information visualization process can be described as a set of transformations applied to raw data to produce interactive graphical representations of information for end-users. A challenge in information visualization is to provide flexible and powerful ways of describing and controlling this process. Most work in this domain addresses the problem at either the application level or the toolkit level. Approaches at the toolkit level are devoted to developers, while approaches at the application level are devoted to end-users. Our approach builds on previous work but goes one step beyond by proposing a unifying view of this process that can be used by both developers and end-users. We present a system named STOOG with a three-fold contribution: (1) a style-sheets-based language for the explicit description of the transformation process at stake in information visualization, (2) an application that automatically builds interactive graphical visualizations of data based on style-sheet descriptions, and (3) a user interface devoted to the conception of style sheets. In this paper, we present STOOG's basic concepts and mechanisms and provide a case study to illustrate the benefits of using STOOG for the interactive and visual exploration of information.
Download

Short Papers
Paper Nr: 43
Title:

An Access Control Model for Massive Collaborative Edition

Authors:

Juliana de Melo Bezerra, Edna M. Santos and Celso M. Hirata

Abstract: The Web has enabled large numbers of users to elaborate documents collaboratively through editing systems. An access control model is essential to discipline user access in these systems. We propose an access control model for massive collaborative edition based on RBAC, which takes into account workflow, document structure and organizational structure. Workflow is used to coordinate the collaboration of participants. The document structure allows the parallel edition of parts of the document, and the organizational structure allows easier management of users. We designed and implemented an editing system based on the model. We show that the model is useful to specify collaborative editing tools and can be used to categorize and identify collaborative editing systems.
Download

Paper Nr: 56
Title:

INVESTIGATING THE ROLE OF UML IN THE SOFTWARE MODELING AND MAINTENANCE: A PRELIMINARY INDUSTRIAL SURVEY

Authors:

Carmine Gravino, Giuseppe Scanniello and Genny Tortora

Abstract: In this paper we present the results of an industrial survey conducted with the Italian software companies that employ a relevant part of the graduates of the University of Basilicata and the University of Salerno. The survey mainly investigates the state of the practice regarding the use of UML in software development and maintenance. The results reveal that the majority of the companies use UML for modeling software systems (in the analysis and design phases) and for performing maintenance operations. Moreover, maintenance operations are mainly performed by less experienced practitioners.
Download

Paper Nr: 57
Title:

A GENERIC METHOD FOR BEST PRACTICE REFERENCE MODEL APPLICATION

Authors:

Stefanie Looso

Abstract: The perceived importance of IT governance has increased in the last decade. Best practice reference models (like ITIL, COBIT, or CMMI) promise support for the diverse challenges IT departments are confronted with. Therefore, interest in best practice reference models grows, and more and more companies apply BPRM to support their IT governance. But there is limited knowledge about how BPRM are applied, and there is no structured method to support their application and realize the full potential of BPRM. Therefore, this paper presents the construction and evaluation of a generic method for the application of BPRM. Following the language-based approach of method engineering, method elements are derived and formally described. The criteria of design science research presented by Hevner et al. (2004) are applied to the evaluation of the constructed method. The intention of this research is to reduce the inefficiencies caused by the inconsistent use of best practice reference models.
Download

Paper Nr: 67
Title:

A CONSOLIDATED ENTERPRISE REFERENCE MODEL Integrating McCarthy’s and Hruby’s Resource-Event-Agent Reference Model

Authors:

Wim Laurier, Maxime Bernaert and Geert Poels

Abstract: This paper introduces a new REA reference model that integrates the transaction and conversion reference models provided by McCarthy, which aimed at designing databases for accounting information systems, and Hruby, which aimed at software development for enterprise information systems, into a single conceptual model that accounts for both inter-enterprise and intra-enterprise processes. This consolidated reference model was developed to support data integration between multiple enterprises and different kinds of enterprise information system (e.g. ERP, accounting and management information systems). First, the state of the art in REA reference models is addressed presenting McCarthy’s and Hruby’s reference models and assessing their ability to represent exchanges (e.g. product for money), transfers (e.g. shipment) and transformations (e.g. production process). Second, the new, integrated REA enterprise reference model is introduced. Third, object model templates for transfers and transformations are presented, demonstrating that the integrated REA reference model is able to represent exchanges, transfers and transformations, where McCarthy’s and Hruby’s reference models can each only represent two of these features.
Download

Paper Nr: 68
Title:

Socialization of Work Practice through Business Process Analysis

Authors:

Mukhammad Andri Setiawan and Shazia Sadiq

Abstract: In today’s competitive business era, having the best practice business process is fundamental to the success of an organisation. Best practice reference models are generally created by experts in the domain, but often the best practice can be implicitly derived from the work practices of actual workers within the organization. In this paper, we propose to utilize the experiences and knowledge of previous business process users to inform and improve the current practices, thereby bringing about a socialization of work practice. We have developed a recommendation system to assist users to select the best practices of previous users through an analysis of business process execution logs. Recommendations are generated based on multi criteria analysis applied to the accumulated process data and the proposed approach is capable of extracting meaningful recommendations from large data sets in an efficient way.
Download

Paper Nr: 73
Title:

Integrating and optimizing business process execution in P2P environments

Authors:

Marco Fernandes, Marco Pereira, Joaquim Arnaldo Martins and Joaquim Sousa Pinto

Abstract: Service oriented applications and environments and peer-to-peer networks have become widely researched topics recently. This paper addresses the benefits and issues of integrating both technologies in the scope of business process execution. It also presents proposals to reduce network traffic and improve the efficiency of such applications.
Download

Paper Nr: 97
Title:

AN ESTIMATION PROCEDURE TO DETERMINE THE EFFORT REQUIRED TO MODEL BUSINESS PROCESSES

Authors:

Claudia Cappelli, Flavia Santoro, Vanessa Nunes, Marcio Barros and José R. Dutra

Abstract: Business process modeling projects are increasingly widespread in organizations. Companies have several processes to be identified and modeled, and usually invest heavily in hiring expert consultants to do such work. However, they still find no guidelines to help them estimate how much a process modeling project will cost or how long it will take. We propose an approach to estimate the effort required to conduct a BPM project and discuss results obtained from over 50 projects in a large Brazilian company.
Download

Paper Nr: 116
Title:

APPLYING AN MDA PROCESS MODELING APPROACH

Authors:

Ana F. Magalhães, Bruno C. da Silva and Rita P. Maciel

Abstract: In order to use the MDA approach, several software processes have been defined over recent years. However, there is a need to specify and maintain MDA software process definitions systematically, in a way that also supports process enactment, reuse, evolution, management and standardization. Some empirical investigations have been performed concerning the usage of several MDA-related approaches. In this paper we describe our technique for MDA software process specification and enactment, including tool support. We also present case studies and the concluding results on the application of our approach to process modeling.
Download

Paper Nr: 136
Title:

Adaptive Execution of Software Systems on Parallel Multicore Architectures

Authors:

Thomas Rauber and Gudula Rünger

Abstract: Software systems are often implemented based on a sequential flow of control. However, new developments in hardware towards explicit parallelism within a single processor chip require a change at software level to participate in the tremendous performance improvements provided by hardware. Parallel programming techniques and efficient parallel execution schemes that assign parallel program parts to cores of the target machine for execution are required. In this article, we propose a design approach for generating parallel software for existing business software systems. The parallel software is structured such that it enables a parallel execution of tasks of the software system based on different execution scenarios. The internal logical structure of the software system is used to create software incarnations in a flexible way. The transformation process is supported by a transformation toolset which preserves correctness and functionality.
Download

Paper Nr: 141
Title:

Ontology-Based Autonomic Computing For Resource Sharing Between Data Warehouses In Decision Support Systems

Authors:

Vlad Nicolicin-Georgescu, Vincent Benatier, Remi Lehn and Henri Briand

Abstract: Complexity is the biggest challenge in managing information systems today, because of the continuous growth in data and information. As decision experts, we are faced with the problems generated by managing Decision Support Systems, one of which is the efficient allocation of shared resources. In this paper, we propose a solution for improving the allocation of shared resources between groups of data warehouses within a decision support system, with Service Level Agreements and Quality of Service as performance objectives. We base our proposal on the notions of autonomic computing, challenging the traditional design of autonomic systems and taking into consideration decision support systems' special characteristics, such as usage discontinuity or service level specifications. To this end, we propose specific heuristics for autonomic self-improvement and integrate aspects of semantic web and ontology engineering as an information source for knowledge base representation, while providing a critical view of the advantages and disadvantages of such a solution.
Download

Paper Nr: 186
Title:

A Federated Triple Store Architecture For Healthcare Applications

Authors:

Bruno Alves, Michael Schumacher and Fabian Cretton

Abstract: Interoperability has the potential to improve care processes and decrease costs of the healthcare system. The advent of enterprise ICT solutions to replace costly and error-prone paper-based records did not fully convince practitioners, and many still prefer traditional methods for their simplicity and relative security. The Medicoordination project, which integrates several partners in healthcare on a regional scale in French-speaking Switzerland, aims at designing eHealth solutions to the problems highlighted by reluctant practitioners. In a derivative project, and through an approach complementary to the IHE XDS IT Profile, we designed, implemented and deployed a prototype of a semantic registry/repository for storing medical electronic records. We present herein the design and specification of a generic, interoperable registry/repository based on the technical requirements of the Swiss eHealth strategy. Although this paper presents an overview of the whole architecture, the focus is on the registry, a federated semantic RDF store managing metadata about medical documents. Our goals are the urbanization of information systems through SOA and the assurance of a level of interoperability between the different actors.
Download

Paper Nr: 188
Title:

STRATEGIC REASONING IN SOFTWARE DEVELOPMENT

Authors:

Yves Wautelet, Sodany Kiv, Vi Tran and Manuel Kolp

Abstract: Software developments tend to be larger and of strategic importance in today's businesses. That is why Product Lifecycle Management (PLM) is no longer only a domain for middle managers and software engineering professionals: top managers require rich models leading to visions on which they can perform strategic analysis to determine their adequacy with long-term objectives. The i* approach, with its social modeling capabilities, as well as service-oriented modeling, are part of this effort. The strategic services model combines those two approaches and defines a couple of environmental factors (namely threats and opportunities) enabling strategic reasoning on the basis of the enterprise's "high-level" added values. Such a framework offers the adequate aggregation level for enabling top managers to take adequate long-term decisions for information systems development. The aim of this paper is to illustrate the application of the strategic services model, as well as strategic reasoning, in the context of the development of a collaborative software application for supply chain management. This case study is of particular interest since the application must be adopted by several actors played by cooperating or competing companies that have to figure out the consequences of adopting such software.
Download

Paper Nr: 195
Title:

MODELING TIME CONSTRAINTS IN INTER-ORGANIZATIONAL WORKFLOWS

Authors:

Mouna Makni, Nejib Ben Hadj-Alouane, Samir Tata and Moez Yeddes

Abstract: This paper deals with the integration of temporal constraints within the context of Inter-Organizational Workflows (IOWs). Expressing and satisfying time deadlines is important for modern business processes, which need to be optimized for efficiency and competitiveness. In this paper, we propose a temporal extension to CoopFlow, an existing approach for designing and modeling IOWs, based on Time Petri Net models and tools. Methods based on reachability analysis and model checking techniques are given for verifying whether or not the added temporal requirements are satisfied, while maintaining the core advantage of CoopFlow, i.e., that each partner can keep the critical parts of its business process private.
Download

Paper Nr: 212
Title:

SPECIFICATION AND INSTANTIATION OF DOMAIN SPECIFIC PATTERNS BASED ON UML

Authors:

Saoussen Rekhis Boubaker, Nadia Bouassida and Rafik Bouaziz

Abstract: Domain-specific design patterns provide for the reuse of architectures addressing recurring design problems in a specific software domain. They capture the domain knowledge and design expertise needed for developing applications. Moreover, they accelerate software development, since the design of a new application consists in adapting existing patterns instead of modeling it from scratch. However, some problems slow their expansion, because they have to incorporate flexibility and variability in order to be instantiated for various applications in the domain. This paper proposes new UML notations that better represent domain-specific design patterns. These notations express the variability of patterns in order to facilitate their comprehension and guide their reuse. The UML extensions are then illustrated in the process control system context, using an example of a data acquisition pattern.
Download

Paper Nr: 213
Title:

INTENTION DRIVEN SERVICE COMPOSITION WITH SERVICE PATTERNS

Authors:

Emna Fki, Chantal Soulé-Dupuy, Said Tazi and Mohamed Jmaiel

Abstract: Service-oriented architecture (SOA) is an emerging approach for building systems based on interacting services. Services need to be discovered and composed in order to meet user needs. Most of the time, these needs correspond to some kind of intention; therefore, they are expressed not in a technical way but in an intentional way. Available service descriptions, and those of their compositions, are rather technical, so the matching between these services and user needs is not a simple issue. This paper defines an intention-driven service description model. It introduces a dynamic service composition mechanism which achieves a user's intention within a given context. We consider service patterns and investigate how these patterns can be used to represent reusable generic services and to generate specific composite services.
Download

Paper Nr: 224
Title:

ENGINEERING AGENT-BASED INFORMATION SYSTEMS: A CASE STUDY OF AUTOMATIC CONTRACT NET SYSTEMS

Authors:

Vincent Couturier, Marc-Philippe Huget and David Telisson

Abstract: In every business, the call for tenders has become an indispensable instrument for fostering the negotiation of new trade agreements. Selection and award are nowadays a long process conducted manually: it is necessary to define criteria for selecting the best offer, evaluate each proposal and negotiate a business contract. In this paper, we present an agent-based approach for the development of systems for the automatic award of contracts (here called Automatic Contract Net Systems). Selection and negotiation are then performed automatically through communication between agents. We focus in this paper on the tendering and selection of the best offer. To facilitate the development of complex systems such as multi-agent systems, we adopt software patterns that guide the designer in the analysis, design and implementation on an agent-based execution platform.
Download

Paper Nr: 226
Title:

QUALITY MEASUREMENT MODEL FOR REQUIREMENTS ENGINEERING FLOSS TOOLS

Authors:

María A. Perez, Edumilis M. Ortega, Kenyer Dominguez and Luis E. Mendoza

Abstract: The goal of delivering a suitable, quality software product increases the importance of properly defining system requirements. Requirements Engineering (RE) is the process of discovering, refining, modeling and specifying software requirements. In addition to the trend of using Free/Libre Open Source Software (FLOSS) tools, we should consider their strengths and weaknesses in the light of suitable RE. This article proposes a quality measurement model for RE FLOSS tools to support their selection process. The characteristics selected for evaluation include Functionality, Maintainability and Usability. The model was applied to four FLOSS tools and assessed for completeness, accuracy and relevance to establish which FLOSS tools support RE, either totally or partially, thus making it useful for Small and Medium-sized Enterprises.
Download

Paper Nr: 243
Title:

MODELING ERP BUSINESS PROCESSES USING LAYERED QUEUEING NETWORKS

Authors:

Stephan Gradl, Manuel Mayer, Holger Wittges and Helmut Krcmar

Abstract: This paper presents an approach to simulating enterprise resource planning (ERP) systems using Layered Queueing Networks (LQN). A case study of an existing production planning process shows how LQN models can be exploited as a performance analysis tool. To gather data about the internal ERP system architecture, an internal trace is analyzed and a detailed model is built to evaluate the system’s performance and scalability in terms of response times with an increasing number of users and CPUs. It is shown that the solver results match the characteristics observed in practice. Depending on the number of CPUs, constant response times are observed up to a certain number of concurrent users.
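The layered queueing analysis itself is not reproduced in the abstract; as a minimal illustration of how response times grow with the number of concurrent users in a closed queueing model, exact Mean Value Analysis (MVA) for a simple (non-layered) closed network can be sketched as follows. The service demands and think time below are hypothetical, not taken from the paper's case study:

```python
def mva(demands, users, think_time=0.0):
    """Exact Mean Value Analysis for a closed product-form network
    of single-server queueing stations.

    demands: mean service demand (seconds) per station
    users:   number of concurrent users in the closed network
    Returns (mean response time, throughput) at the given user count.
    """
    queue = [0.0] * len(demands)  # mean queue length per station
    for n in range(1, users + 1):
        # residence time per station: service demand inflated by
        # the customers already queued there (arrival theorem)
        resid = [d * (1 + q) for d, q in zip(demands, queue)]
        r = sum(resid)                    # total response time
        x = n / (think_time + r)          # system throughput
        queue = [x * rk for rk in resid]  # Little's law per station
    return r, x

# Hypothetical ERP transaction: CPU and disk demands in seconds
r1, _ = mva([0.02, 0.05], users=1)
r50, _ = mva([0.02, 0.05], users=50, think_time=1.0)
print(round(r1, 3))  # → 0.07 (one user: response time = sum of demands)
```

With growing `users`, `r50` exceeds `r1` as queueing builds up at the bottleneck station, which is the qualitative behaviour the abstract describes.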
Download

Paper Nr: 246
Title:

Assessing The Interference In Concurrent Business Processes

Authors:

Nick van Beest, Nick Szirbik and Hans Wortmann

Abstract: Current Enterprise Information Systems support the business processes of organizations by explicitly or implicitly managing the activities to be performed and storing the data required for employees to do their work. However, concurrent execution of business processes may still yield undesired business outcomes as a result of process interference. As the disruptions are primarily visible to external stakeholders, organizations are often unaware of these cases. In this paper, a method is presented, along with an operational tool, that makes it possible to identify potential interference and analyze the severity of the interference resulting from concurrently executed processes. The method is subsequently applied to a case in order to verify it and reinforce the relevance of the problem.
Download

Paper Nr: 250
Title:

Modeling data interoperability for e-services

Authors:

Jose Delgado

Abstract: In global, distributed systems, services evolve independently and there is no dichotomy between compile and run-time. This has severe consequences. Static data typing cannot be assumed. Data typing by name and reference semantics become meaningless. Garbage collection cannot be used in this context and (references to) services can fail temporarily at one time or another. Classes, inheritance and instantiation also don’t work, because there is no coordinated compile-time. This paper proposes a service interoperability model based on structural conformance to solve these problems. The basic modeling entity is the resource, which can be described by structure and by behavior (service). We contend that this model encompasses and unifies layers found separate in alternative models, in particular Web Services and RESTful services.
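The structural-conformance idea (a service is compatible if its structure matches, regardless of declared type names or inheritance) can be illustrated with Python's `typing.Protocol`, which checks conformance by structure rather than by name. The `Payable` protocol and the two services below are invented for illustration and are not the paper's actual resource model:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Payable(Protocol):
    """Structural type: anything with a pay(amount) method conforms,
    independently of its class name or inheritance hierarchy."""
    def pay(self, amount: float) -> str: ...

class LegacyBilling:       # never mentions Payable
    def pay(self, amount: float) -> str:
        return f"billed {amount}"

class RestInvoiceService:  # evolved independently, still conforms
    def pay(self, amount: float) -> str:
        return f"invoiced {amount}"

def settle(service: Payable, amount: float) -> str:
    # accepts any structurally conforming service; no shared base
    # class and no coordinated compile-time are required
    return service.pay(amount)

print(settle(LegacyBilling(), 10.0))              # → billed 10.0
print(isinstance(RestInvoiceService(), Payable))  # → True
```

Because conformance is checked structurally, the two services can evolve independently and still interoperate, which is the property the abstract argues for in globally distributed systems.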
Download

Paper Nr: 256
Title:

CONTEXT-BASED PROCESS LINE

Authors:

Vanessa Nunes, Claudia Werner and Flavia Santoro

Abstract: Complexity and dynamism of day-to-day activities in organizations are inextricably linked, one impacting the other and increasing the challenge of constantly adapting the way work is organized to address emerging demands. In this scenario, a variety of information, insights and reasoning is processed between people and systems during process execution. We argue that process variations could be decided in real time, using collected context information. This paper presents a proposal for a business process line cycle, with a set of activities encapsulated in the form of components as the central artefact. We explain how composition and adaptation of work may occur in real time and discuss a scenario for this proposal.
Download

Paper Nr: 265
Title:

FRAMEWORKS FOR UNDERSTANDING THE CHANGING ROLE OF INFORMATION SYSTEMS IN ORGANIZATIONS

Authors:

Jorge Duarte and Mamede Lima-Marques

Abstract: Information systems (IS) evolve with advances in technology in a silent but radical manner. Initially monolithic and isolated, they are now modular, diversified, integrated and ubiquitous. This ubiquity is not always planned. New applications arise quickly and spontaneously. New technology trends such as BPM, SOA and EA accentuate the pace of change. The wide range of applications of IS brings complexity to the planning of strategic actions. We identify a lack of research addressing the complexity of IS planning. This work studies several aspects of IS and identifies some frameworks that, when integrated, help to understand IS issues and enable the planning of IS actions aligned with organizational objectives.
Download

Paper Nr: 270
Title:

COLLABORATIVE BUSINESS PROCESS ELICITATION THROUGH GROUP STORYTELLING

Authors:

João R. Gonçalves, Flavia Santoro and Fernanda Baião

Abstract: Business Process Modelling remains a costly and complex task for most organizations. One of the main difficulties lies in the process elicitation phase, where the process analyst attempts to extract information from the process’ participants and other resources involved. This paper describes a case study in which a previously proposed Story Mining method was applied. The Story Mining method and its supporting tool, ProcessTeller, make use of collaborative storytelling and natural language processing techniques for the semi-automatic extraction of BPMN-compliant business process elements from text.
Download

Paper Nr: 274
Title:

A SPEM BASED SOFTWARE PROCESS IMPROVEMENT META-MODEL

Authors:

Rodrigo Espindola and Jorge Audy

Abstract: Nowadays, organizations use Software Process Improvement (SPI) reference models as the starting point for their quality improvement initiatives. There is a consensus that by understanding and improving the software process we can improve the software product as well. Several studies also indicate the concurrent adoption of multiple SPI reference models by organizations. The need for new approaches that integrate those SPI reference models with each other, and with the software processes developed to comply with them, has increased. This paper proposes a SPEM-based SPI meta-model as a way to support those kinds of integration.
Download

Paper Nr: 278
Title:

ON THE APPLICATION OF AUTONOMIC AND CONTEXT-AWARE COMPUTING TO SUPPORT HOME ENERGY MANAGEMENT

Authors:

Boris Shishkov, Marten van Sinderen and Martijn Warnier

Abstract: Conventional energy sources are becoming scarce and, with no (eco-friendly) alternatives deployed at a large scale, it is currently important to find ways to better manage energy consumption. In this paper, we propose ICT-related solution directions that concern the management of energy consumption within a household. In particular, we consider two underlying objectives, namely: (i) to minimize the energy consumption in households; (ii) to avoid energy consumption peaks for larger residential areas. The proposed solution directions envision a service-oriented approach that is used to integrate ideas from Autonomic Computing and Context-aware Computing: the former motivates a selective on/off powering of thermostatically controlled appliances, which allows for energy redistribution over time; the latter motivates using context information to analyze the energy requirements of a household at a particular moment so that, based on this information, appliances can be powered down. Household-internally, this can help keep energy consumption as low as possible without violating the preferences of residents. Area-wise, this can help avoid energy consumption peaks. It is thus expected that such an approach can contribute to the reduction of home energy consumption in an effective and user-friendly way. Our proposed solution directions are not only introduced and motivated but also partially elaborated through a small illustrative example.
Download

Paper Nr: 301
Title:

Risk Analysis for Inter-organizational Controls

Authors:

Joris Hulstijn and Jaap Gordijn

Abstract: Existing early requirements engineering methods for dealing with governance and control issues do not explicitly support comparison of alternative solutions, and have no clear semantics for the notion of a control problem. In this paper we present a method for doing risk analysis of inter-organizational business models, which is based on value modeling. A risk is the likelihood of a negative event multiplied by its impact. In value modeling, the impact of a control problem is given by the missing value. The likelihood can be estimated based on assumptions about trust and about the underlying coordination model. This allows us to model the expected value of a transaction. The approach is illustrated by a comparison of the risks of different electronic commerce scenarios for delivery and payment.
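The risk definition stated in the abstract (likelihood of a negative event multiplied by its impact, with impact given by the missing value) admits a simple numeric sketch. The scenario figures below are invented for illustration and are not taken from the paper:

```python
def risk(likelihood: float, impact: float) -> float:
    """Risk of a control problem: the likelihood of the negative
    event multiplied by its impact (the value that goes missing)."""
    return likelihood * impact

def expected_value(nominal_value: float, control_problems) -> float:
    """Expected value of a transaction: the nominal value minus the
    summed risks of its (likelihood, impact) control problems."""
    return nominal_value - sum(risk(p, v) for p, v in control_problems)

# Hypothetical comparison of two delivery/payment scenarios
# (all figures invented for illustration):
value = 100.0
prepay = [(0.25, 100.0)]   # buyer risks losing the full payment
postpay = [(0.5, 30.0)]    # seller risks a partial loss
print(expected_value(value, prepay))   # → 75.0
print(expected_value(value, postpay))  # → 85.0
```

Comparing expected values of alternative scenarios in this way is what allows the alternative electronic commerce configurations mentioned in the abstract to be ranked.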
Download

Paper Nr: 325
Title:

Performance Management and Control: A Case Study in Mercedes-Benz Cyprus

Authors:

Angelika Kokkinaki and Angelos Vouldis

Abstract: This paper is a case study that outlines design and implementation issues related to an application that facilitates process management and controls business performance issues in an enterprise that sells extended products. The notion of an extended product is that of a product bundled with services. To this aim, the case study described in this paper pursues three objectives: to review existing theory on the subject of designing and developing applications and interfaces in enterprises; to identify the target users’ requirements for the design and development of such applications; and to combine primary and secondary research results towards the design and development of a quality management system. The value of the research relates primarily to its knowledge contribution to the wider field of designing and developing applications and interfaces for enterprises retailing extended products.
Download

Paper Nr: 326
Title:

AN APPROACH FOR THE DEVELOPMENT OF DOOH-ORIENTED INFORMATION SYSTEMS

Authors:

Maurizio Tucci, Pietro D’ambrosio and Filomena Ferrucci

Abstract: Recent years have been characterised by an increasing demand for using digital services and multimedia content “out of home”. This poses new challenges to software factories in terms of systems integration and extension. In this paper, we report on an industrial research project realized by some ICT companies together with researchers of the University of XXX. The goal of the project was to define a new approach for developing enterprise systems able to integrate traditional applications with Digital Out Of Home (DOOH) extensions. The experience was carried out by defining a methodology and tools to develop this type of system in an industrial context. The proposed approach was evaluated by carrying out two case studies.
Download

Paper Nr: 327
Title:

Requirements for Personal Knowledge Management Tools

Authors:

Max Völkel

Abstract: Personal knowledge management (PKM) is a crucial element as well as complement of enterprise knowledge management (EKM) which has been largely neglected by Enterprise Information Systems, up to now. This paper collects requirements for a specific class of PKM software, which supports personal note taking and the idea of extending the human memory by information management. It introduces the knowledge-cue life cycle which describes how information artefacts can be used for helping to denote, remember, use, and further develop knowledge embodied in people’s heads. Based on this life cycle and on a literature study, this paper derives a comprehensive requirements catalogue to be fulfilled by knowledge articulation tools used in PKM. This requirements list can be used as a design specification and research agenda for PKM tool builders, and to assess the suitability of existing tools for PKM.
Download

Paper Nr: 355
Title:

THE CONVERGENCE OF WORKFLOWS, BUSINESS RULES AND COMPLEX EVENTS - Defining a Reference Architecture and Approaching Realization Challenges

Authors:

Markus Döhring, Lars M. Karg, Eicke Godehardt and Birgit Zimmermann

Abstract: For years, research has been devoted to the introduction of flexibility to enterprise information systems. There are corresponding concepts for mainly three established paradigms: workflow management, business rule management and complex event processing. It has, however, been indicated that the integration of the three paradigms with respect to their meta-models and execution principles yields significant potential for more efficient and flexible enterprise applications, and that there is still a lack of conceptual and technical guidance for their integration. The contribution of this work is a loosely coupled architecture integrating all three paradigms. This includes a clear definition of its building blocks together with the main realization challenges. In this context, an approach is presented for assisting modelers in deciding which paradigm should be used, and in which way, for expressing a particular business aspect.
Download

Paper Nr: 377
Title:

THE SNARE LANGUAGE OVERVIEW

Authors:

Alexandre Barão and Alberto R. Silva

Abstract: Social network systems identify existing relations between social entities and provide a set of automatic inferences on these relations, promoting better interactions and collaborations between these entities. However, we find that most existing organizational information systems do not provide social network features from scratch, even though they have to manage social entities somehow. The focus of this paper starts from this fact, and proposes the SNARE Language as the conceptual framework for the SNARE system, short for “Social Network Analysis and Reengineering Environment”. SNARE’s purpose is to promote social network capabilities in information systems not originally designed for that purpose. Visual models are needed to infer and represent new or established patterns of relations. This paper overviews the SNARE language and shows its applicability through several models regarding the application of SNARE to a real LinkedIn scenario.
Download

Paper Nr: 381
Title:

COMBINING SEMANTIC TECHNOLOGIES AND DATA MINING TO ENDOW BSS/OSS SYSTEMS WITH INTELLIGENCE

Authors:

Javier Martínez Elicegui, Germán T. del Valle and Marta D. Francisco Marcos

Abstract: Businesses need to "reduce costs" and improve their "time-to-market" to compete in a better position. Systems must contribute to these two goals through good designs and technologies that give them agility and flexibility towards change. Semantics and Data Mining are two key pillars for evolving current legacy systems towards smarter systems that adapt better to change. In this article, we present some solutions for evolving existing systems, where the end user has the possibility of modifying the functioning of the systems by incorporating new business rules in a Knowledge Base.
Download

Paper Nr: 394
Title:

Understanding Access Control Challenges in Loosely-Coupled Multidomain Environment

Authors:

Yue Zhang and James Joshi

Abstract: Access control to ensure secure interoperation in multidomain environments is a crucial challenge. A multidomain environment can be categorized as tightly-coupled or loosely-coupled. The specific access control challenges in loosely-coupled environments have not been studied adequately in the literature. In this paper, we analyze the access control challenges specific to loosely-coupled environments. Based on our analysis, we propose a general decentralized secure interoperation framework for loosely-coupled environments based on Role Based Access Control (RBAC). We believe our work takes the first step towards a more complete secure interoperation solution for loosely-coupled environments.
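As background to the RBAC-based framework (the paper's decentralized framework itself is not reproduced here), the core RBAC check and the loose-coupling idea of translating a foreign domain's role onto a local one before checking locally can be sketched as follows. All users, roles, permissions and the cross-domain mapping are hypothetical:

```python
# Minimal RBAC core: users -> roles -> permissions.
user_roles = {"alice": {"doctor"}, "bob": {"clerk"}}
role_perms = {"doctor": {"read_record", "write_record"},
              "clerk": {"read_record"}}
# Loosely-coupled interoperation (hypothetical agreement): domain B's
# "physician" role is mapped onto the local "doctor" role on demand,
# without merging the two domains' role hierarchies.
cross_domain_map = {("domainB", "physician"): "doctor"}

def check_access(user: str, permission: str) -> bool:
    """True iff some role assigned to the user grants the permission."""
    return any(permission in role_perms.get(r, set())
               for r in user_roles.get(user, set()))

def check_foreign(domain: str, foreign_role: str, permission: str) -> bool:
    """Translate a foreign role via the agreed mapping, then check locally."""
    local = cross_domain_map.get((domain, foreign_role))
    return local is not None and permission in role_perms.get(local, set())

print(check_access("bob", "write_record"))                    # → False
print(check_foreign("domainB", "physician", "write_record"))  # → True
```

Keeping the mapping per pair of domains, rather than building one global role hierarchy, is what distinguishes the loosely-coupled setting the abstract analyzes from the tightly-coupled one.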
Download

Paper Nr: 417
Title:

TOWARDS A COMMUNITY FOR INFORMATION SYSTEM DESIGN VALIDATION

Authors:

Sophie Dupuy-Chessa, Dominique Rieu and Nadine Mandran

Abstract: Information systems are becoming ubiquitous. This opens a large spectrum of possibilities for end-users, but design complexity is increasing. Domain-specific languages have therefore been proposed, sometimes supported by appropriate processes. These proposals are interesting, but they are under-validated. Even if validation is a difficult task which requires specific knowledge, we argue that validation should be systematic. But many problems remain to be considered to achieve this goal: 1) computer scientists are often not trained in evaluation; 2) the domain of information systems design validation and evaluation is still under construction. To cope with the first problem, we propose to capitalize evaluation practices into patterns so that they can be reused by non-specialists of validation practices. For the second issue, we propose an environment where evaluation specialists and method engineering specialists can work together to define their common and reusable patterns.
Download

Paper Nr: 424
Title:

TOWARDS A MORE RELATIONSHIP-FRIENDLY ONTOLOGY FOUNDATION FOR CONCEPTUAL MODELLING

Authors:

Roger Tagg

Abstract: Researchers have for some years been looking to the field of ontology to provide a foundation structure of meaning which would provide a yardstick against which different modelling systems and methodologies can be evaluated. The Bunge-Wand-Weber ontology (BWW) has led the field in this endeavour, but since 2000 it has undergone some criticism. A notable feature of BWW is that it does not treat relationships as first-class objects. Several recent works have proposed ontologies that do emphasize relationships, although to a somewhat limited extent. Based on previous work on a relationship-oriented ontology, this paper suggests directions in which a Mark 2 BWW could be evolved.
Download

Paper Nr: 430
Title:

Model-Driven Engineering of Functional Security Policies

Authors:

Frédéric Gervais, Marc Frappier, Richard St-Denis, Michel E. Jiague, Jérémy Milhau, Régine Laleau and Pierre Konopacki

Abstract: This paper describes an ongoing project on the specification and automatic implementation of functional security policies. We advocate a clear separation between functional behavior and functional security requirements. We propose a formal language to specify functional security policies, and we are developing techniques by which a formal functional security policy can be automatically implemented. Hence, our approach is highly inspired by model-driven engineering. Furthermore, our formal language will enable us to use model checking techniques to verify that a security policy satisfies desired properties.
Download

Paper Nr: 434
Title:

User Context Models: A Framework to Ease Formal Software Verification

Authors:

Amine Raji and Philippe Dhaussy

Abstract: Several works emphasize the difficulties of software verification applied to embedded systems. In past years, formal verification techniques and tools were widely developed and used by the research community. However, the use of formal verification at industrial scale is still difficult and expensive, and requires a lot of time. This is due to the size and complexity of the manipulated models, but also to the important gap between the requirement models manipulated by different stakeholders and the formal models required by existing verification tools. In this paper, we fill this gap by providing the UCM framework to automatically generate the formal models used by formal verification tools. At this stage of our work, we generate behavior models of environment actors interacting with the system directly from an extended form of use cases. These behavioral models can be composed directly with the system automata to be verified using existing model checking tools.
Download

Paper Nr: 438
Title:

ANALYSIS OF EFFECTIVE APPROACH FOR BUSINESS PROCESS RE-ENGINEERING From the Perspective of Organizational Factor

Authors:

Kayo Iizuka, Yasuki Iizuka and Kazuhiko Tsuda

Abstract: This paper presents an analysis of the effects of business process re-engineering (BPR), including customer satisfaction, and their formative factors. BPR has been studied for a few decades; however, additional issues have come into existence in recent years, e.g., the balance of efficiency and internal control (including information security management), and organizational reform or enterprise integration arising from the economic circumstances of recent years. The analyses in this paper aim to address these issues. In trying to clarify the mechanism for achieving BPR effectiveness, the analysis focuses on organizational perspectives and communication infrastructure.
Download

Paper Nr: 440
Title:

A Process-Driven Methodology for Modeling Service-Oriented Complex Information Systems

Authors:

Alfredo Cuzzocrea, Alessandra De Luca and Salvatore Iiritano

Abstract: This paper extends state-of-the-art design methodologies for classical information systems by introducing an innovative methodology for designing service-oriented information systems. Service-oriented information systems can be viewed as information systems adhering to the novel service-oriented paradigm, with which a plethora of novel technologies, such as Web Services, Grid Services and Cloud Computing, currently align. On the other hand, the current state-of-the-art literature includes few papers that focus attention on this interesting research challenge. With the aim of filling this gap, in this paper we provide a process-driven methodology for modeling service-oriented complex information systems, and we prove its effectiveness and reliability on a comprehensive case study represented by a real-life research project.
Download

Paper Nr: 444
Title:

SITUATIONAL METHOD ENGINEERING APPLIED FOR THE ENACTMENT OF DEVELOPMENT PROCESSES An Agent Based Approach

Authors:

Benjamin Honke, Holger Seemüller, Bernhard Bauer and Holger Voos

Abstract: Interdisciplinary product development is faced with the collaboration of diverse roles and a multitude of interrelated artifacts. Traditional and sequential process models cannot deal with the long-lasting and dynamic behavior of the development processes of today. Moreover, development processes have to be tailored to the needs of the projects, which are usually distributed today. Thus, keeping these projects on track from a methodology point of view is difficult. In order to deal with these challenges, this paper will present a novel method engineering and enactment approach. It combines the ideas of workflow technologies and product line engineering for method engineering as well as agent technology for the development process enactment.
Download

Paper Nr: 12
Title:

GEOPROFILE: UML profile for conceptual modeling of geographic databases

Authors:

Gustavo Sampaio, Filipe Nalon and Jugurta Lisboa-Filho

Abstract: After many years of research in the field of conceptual modeling of geographic databases, experts have produced different alternative conceptual models. However, even today there is no consensus on which is the most suitable one for modeling geographic data applications, which brings up a number of problems for the advancement of the field. A UML profile allows a structured and precise UML extension, being an excellent solution to standardize domain-specific modeling, as it uses the entire UML infrastructure. This article proposes a UML profile developed specifically for conceptual modeling of geographic databases, called GeoProfile. This is not a definitive proposal; we view this work as the first step towards the unification of the various existing models, aiming primarily at semantic interoperability.
Download

Paper Nr: 28
Title:

A gap analysis tool for SMEs targeting ISO/IEC 27001 compliance

Authors:

Thierry Valdevit and Nicolas Mayer

Abstract: Current trends indicate that information security is critical for today’s enterprises. As managers realise they cannot ignore potential security risks, they tend to turn to the ISO/IEC 27001 standard in order to implement an Information Security Management System (ISMS). While being adopted by large companies, ISMSs are still considered out of range by numerous smaller entities. Helping SMEs gain access to ISO/IEC 27001 certification is still a challenge. In this context, the initial step of an ISMS implementation project is significant: a gap analysis highlighting the current status of the enterprise with regard to the standard, and thus the resources needed to succeed in the project. This paper presents the method and research work performed in order to design, experiment with and improve an SME-oriented gap analysis tool for ISO/IEC 27001.
Download

Paper Nr: 53
Title:

EVALUATING UML SEQUENCE MODELS USING THE SPIN MODEL CHECKER

Authors:

Yoshiyuki Shinkawa

Abstract: The UML sequence diagram is one of the most important diagrams for behavior modeling; however, there are few established criteria, methodologies and processes to evaluate the correctness of the models depicted by this diagram. This paper proposes a formal approach to evaluating the correctness of UML sequence models using the SPIN model checker. In order for SPIN to deal with the models, they must be expressed in the form of Promela code and LTL formulae. A set of definite rules is presented that can extract such code and formulae from given UML sequence models.
Download

Paper Nr: 74
Title:

WEB SERVICES DEPLOYMENT ON P2P NETWORKS, BASED ON JXTA

Authors:

Marco Pereira, Marco Fernandes, Joaquim Arnaldo Martins and Joaquim Sousa Pinto

Abstract: Digital libraries require several different services. A common way to provide services to digital libraries is through Web services, but Web services are usually based on centralised architectures. In our work we propose that peer-to-peer networks can be used to relieve the load on centralised components of the digital library infrastructure. One of the challenges of this approach is to develop a method that allows the integration of existing Web services into this new setting. In this paper we propose the use of a proxy that exposes Web services that are only accessible through the peer-to-peer network to the outside world. This proxy also manages interactions with existing external Web services, making them appear as part of the network itself, thus enabling us to reap the benefits of both architectures.
Download

Paper Nr: 78
Title:

Model-Driven System Testing of Service Oriented Systems

Authors:

Michael Felderer, Joanna Chimiak-Opoka and Ruth Breu

Abstract: This paper presents a novel standard-aligned approach for model-driven system testing of service oriented systems, based on tightly integrated but separated platform-independent system and test models. Our testing methodology supports test-driven development and guarantees high-quality system and test models by checking consistency and coverage. Our test models are executable and can be considered part of the system definition. We show that our approach is suited to handle important system testing aspects of service oriented systems, such as the integration of various service technologies or the testing of service level agreements. We also provide full traceability between functional and non-functional requirements, the system model, the test model, and the executable services of the system, which is crucial for efficient test evaluation. The system model and test model are aligned with the existing specifications SoaML and the UML Testing Profile via a mapping of metamodel elements. The concepts are presented on an industrial case study.
Download

Paper Nr: 88
Title:

A generic metric for measuring the complexity of models

Authors:

Christian Schalles, Michael Rebstock and John Creagh

Abstract: In recent years, various object- and process-oriented modelling methods have been developed to support the process of modelling in enterprises. When these methods are applied, graphical models are generated and used to depict various aspects of enterprise architectures. In this context, surveys analyzing modelling languages in different ways have been conducted, in many cases using experimental data collection methods. The complexity of the concrete models used often affects the outcome of these studies. To ensure that complexity values of different models are comparable, a generic metric for measuring the complexity of models is proposed.
Download

Paper Nr: 128
Title:

FROM STRATEGIC TO CONCEPTUAL ENTERPRISE INFORMATION REQUIREMENTS: A mapping tool

Authors:

Giovanni Pignatelli and Gianmario Motta

Abstract: Enterprise information analysis can be modeled on three levels: logical, conceptual and strategic. The logical level is used daily on thousands of projects to design databases. The conceptual level is used by analysts to structure detailed information needs expressed by users. The strategic level is used by IT and user management to define the enterprise information architecture and/or to assess the viability of the current information assets. While mapping conceptual onto logical modeling is well established, the strategic and conceptual levels are poorly linked. This drawback very often prevents enterprises from implementing a sound information strategy. We present here a method that maps strategic enterprise information onto conceptual information modeling. For strategic modeling, a comprehensive framework is used that makes it possible to readily identify the information domains of a wide range of enterprises. Mapping strategic to conceptual models is performed by a set of simple, predefined rules. The paper also illustrates the tool that has been developed to assist the whole design and mapping process. Finally, a case study on materials handling exemplifies our approach.
Download

Paper Nr: 165
Title:

AN EFFICIENT METHOD FOR GAME DEVELOPMENT USING COMPILER

Authors:

Jae S. Jeong and Soongohn Kim

Abstract: As the development of online games becomes more and more extensive, more manpower becomes essential for game development. Especially in programming, it can happen that the original scheme from the planning department is not fully developed and expressed by the programmer; depending on the programmer's ability, the result may differ from the plan and be less enjoyable than expected. It is also essential to spend much time checking whether an error came from planning or from programming. To solve these kinds of problems, we have developed a compiler dedicated to games, which uses the API for game graphics and the API for quests that have been used in game development. The compiler helps the game planner to find logical problems directly through manual source coding. The special game compiler also helps the game developer to come up with various kinds of efficient plans. It has the advantages of lowering dependency on the game programmer as well as lowering the cost of production and labor resources.
Download

Paper Nr: 174
Title:

A FRAMEWORK FOR ESTIMATION OF THE ENVIRONMENTAL COSTS OF THE TECHNOLOGICAL ARCHITECTURE - An approach to reducing the organizational impact on environment

Authors:

Jorge Cavaleiro, André Vasconcelos and André Pedro

Abstract: Green IT was developed from the growing concerns about the rise of the information systems energy cost and power consumption along with the need for an environmentally efficient image of the organization. This work addresses and links three main concerns: enterprise architecture modelling, need for energy cost cuts and energy efficiency of information technology (IT). It describes a new method to estimate the IT architecture energy costs and CO2 emissions, based on the technology layer of enterprise architecture, and some solutions for solving or, at least, reducing the environmental impact of the latter.
Download

Paper Nr: 180
Title:

ON ENTERPRISE INFORMATION SYSTEMS ADDRESSING THE DEMOGRAPHIC CHANGE

Authors:

Silvia Schacht and Alexander Mädche

Abstract: The demographic change influences nearly every area of life, such as health care, education, social systems and the productivity of companies. Therefore, appropriate adjustments to an aging society are necessary. To date, many studies have been carried out that weigh the advantages and disadvantages of the various alternatives. The impact of an older workforce on productivity when using enterprise information systems has been considered only sparsely. The following paper presents our research intention to design and develop enterprise information systems addressing the demographic change.
Download

Paper Nr: 194
Title:

THE JANUS FACE OF LEAN ENTERPRISE INFORMATION SYSTEMS An Analysis of Signs, Systems and Practices

Authors:

Coen Suurmond

Abstract: The term “Enterprise Information System” can be used in two ways: (1) to denote the comprehensive structure of information flows in an enterprise, or (2) to denote the computer system supporting the business processes. In this paper I will argue that we should always take the first meaning as our starting point, and that in analysing the use of information and information flows we should recognise the different kinds of sign systems that are appropriate to different kinds of use of information in business processes. In system development we should carefully analyse the way a specific company operates, and design a comprehensive information system according to the principles of Lean Thinking and the conversational maxims of Grice.
Download

Paper Nr: 209
Title:

Validation of a Measurement Framework of Business Process and Software System Alignment

Authors:

Lerina Aversano, Carmine Grasso and Maria Tortorella

Abstract: The alignment degree existing between a business process and the supporting software systems expresses how the software systems support the business process. This measure can be used for indicating business requirements that the software systems do not implement. Methods are needed for detecting the alignment level existing between software systems and business processes and identifying the software changes to be performed for increasing and keeping an adequate alignment level. This paper proposes a framework including a set of metrics codifying the alignment concept with the aim of measuring it, detecting misalignment, identifying and performing software evolution changes. The framework is, then, validated through a case study.
Download

Paper Nr: 233
Title:

THE IEEE STANDARDS COMMITTEE P1788 FOR INTERVAL ARITHMETIC REQUIRES AN EXACT DOT PRODUCT

Authors:

Ulrich Kulisch and Van Snyder

Abstract: Computing with guarantees is based on two arithmetical features. One is fixed (double) precision interval arithmetic. The other is dynamic precision interval arithmetic, here also called long interval arithmetic. The basic tool for achieving high-speed dynamic precision arithmetic for real and interval data is an exact multiply-and-accumulate operation and, with it, an exact dot product. Actually, the simplest and fastest way of computing a dot product is to compute it exactly. Pipelining allows it to be computed at the same high speed as vector operations on conventional vector processors. Long interval arithmetic fully benefits from this high speed. Exactitude brings very high accuracy, and thereby stability, into computation. This document is intended to provide some background information, to increase awareness, and to informally specify the implementation of an exact dot product.
Download

Paper Nr: 253
Title:

COMPONENT-ORIENTED MODEL-BASED WEB SERVICE ORCHESTRATION

Authors:

Suela Berisha, Jacques Hamalian and Béatrice Rumpler

Abstract: Web service orchestration enables cross-departmental coordination of business processes in a heterogeneous environment. We have designed a component-oriented Web service orchestration based on service functionality in an Enterprise Service Bus (ESB). Our orchestration model is centralized: a single service acts as the central coordinator containing the composition logic of the other services involved. These services are modelled using the Service Component Architecture (SCA) specification, as defined in the UML profile described in this paper. SCA conforms to SOA (Service Oriented Architecture) principles using CBSE (Component-Based Software Engineering) techniques. Finally, our specified orchestration model is implemented using the Model-Driven Architecture (MDA) approach.
Download

Paper Nr: 254
Title:

REACT-MDD: Reactive Traceability in Model-Driven Development

Authors:

Marco Costa

Abstract: The development of information systems has evolved into a complex task involving a multitude of programming and modelling paradigms, notations and technologies. Tools such as integrated development environments (IDEs), computer-aided systems engineering (CASE) tools and relational database management systems (RDBMSs), among others, have evolved to a reasonable state and are used to generate the different types of artefacts needed in this context. ReacT-MDD is an open model for traceability between artefacts that was instantiated in a prototype. We present and discuss some practical issues of ReacT-MDD in the context of reactive traceability, which is also described.
Download

Paper Nr: 258
Title:

Extending DEMO - Control Organization Model

Authors:

David Aveiro, A. Rito Silva and José Tribolet

Abstract: In this paper we present part of an extension to the Design and Engineering Methodology for Organizations (DEMO) – a proposal for an ontological model of the generic Control Organization that we argue exists in every organization. With our proposal, DEMO can now be used to explicitly specify critical properties of an organization – which we call measures – whose values must respect certain restrictions imposed by other properties of the organization – which we call viability norms. We can now also precisely specify, with DEMO, defined resilience strategies that control and eliminate dysfunctions – violations of viability norms.
Download

Paper Nr: 268
Title:

Towards a Process to Information System Development With Distributed Teams

Authors:

Gislaine C. Leal, Cesar Alberto Silva, Elisa Huzita and Tania Tait

Abstract: The software engineering area offers support for managing the information systems development process. Software engineering processes have well-defined artifacts, roles and activities, allowing adjustments to specific situations. However, these processes are generally oriented towards the development of co-located projects. They do not take into account peculiarities regarding coordination, control and communication when development is carried out by distributed teams. The purpose of this paper is to contribute to the software engineering area by presenting a comparative analysis of some development processes from the perspective of development with distributed teams. Additionally, it points to the need for a process definition that includes the peculiarities of this approach to software development.
Download

Paper Nr: 306
Title:

A Lean Enterprise Architecture for Business Process Re-Engineering and Re-Marketing

Authors:

Clare Comm and Dennis Mathaisel

Abstract: An agile enterprise is an organization that rapidly adapts to market and environmental changes in the most productive and cost-effective ways. To be agile, an enterprise should utilize the key principles of enterprise architecting, enabled by the most recent developments in information and communication technologies and by lean principles. This paper describes a Lean Enterprise Architecture (LEA) to organize the activities for the transformation of the enterprise to agility. It is the application of systems architecting methods to design, develop, produce, construct, integrate, validate, and implement a lean enterprise using information engineering and systems engineering methods and practices. LEA is a comprehensive framework used to align an organization's information technology assets, people, operations, and projects with its operational characteristics. The architecture defines how information and technology support the business operations and provide benefit for its stakeholders. The architecting process incorporates lean attributes and values as design requirements in creating the enterprise. The application of the LEA is less resource intensive and disruptive to the organization than traditional lean enterprise transformation methods and practices. Thus, it is essential that the merits of this process are re-marketed (communicated) to the stakeholders to encourage its acceptance.
Download

Paper Nr: 340
Title:

Coloured Petri Nets with Parallel Composition to Separate Concerns

Authors:

Ella Roubtsova and Ashley Mcneile

Abstract: We define a modeling language based on combining Coloured Petri Nets with Protocol Modeling semantics. This language combines the expressive power of Coloured Petri Nets in describing behavior with the ability provided by Protocol Modeling to compose partial behavioral descriptions. The resultant language can be considered as a domain specific Coloured Petri Net based language for deterministic and constantly evolving systems.
Download

Paper Nr: 343
Title:

EMPLOYEES' UPSKILLING THROUGH SERVICE-ORIENTED LEARNING ENVIRONMENTS

Authors:

Boris Shishkov, Marten van Sinderen and Roberto Cavalcante

Abstract: Aiming to increase their competitiveness, many companies are turning to new learning concepts and strategies that stress collaboration among employees. Both individual learning and organizational learning should be considered in relation to each other to create a synergetic effect when introducing and maintaining skills among employees, as well as when it is necessary to train personnel in non-core competences. For such training, the possibility to collaborate with third parties is advantageous, especially in co-creating and/or using learning content. In this paper, we propose ICT-related solution directions for an adaptable, flexible, and collaborative upskilling of employees. We consider, in particular, two underlying objectives, namely adjustability of content+process and collaborative content co-creation. The adjustability of content+process concerns the specialization of generic content+process, driven by the user (employee), and also individualism. The collaborative content co-creation concerns the (cross-border) dynamic creation of content by several persons. Our proposed solution directions (introduced and motivated in this paper) envision an approach that integrates ideas from computing paradigms related to SOA – Service-Oriented Architecture. It is expected that such an approach can contribute to an effective and user-friendly upskilling of employees.
Download

Paper Nr: 370
Title:

COMPATIBILITY VERIFICATION OF COMPONENTS IN TERMS OF FUNCTIONAL AND EXTRA-FUNCTIONAL PROPERTIES Tool Support

Authors:

Kamil Jezek and Premek Brada

Abstract: Component-based programming, as a technology that increases development speed and decreases the cost of the final product, promises a noticeable improvement in the development process of large enterprise applications. Even though component-based programming is a promising technology, it has still not reached its maturity. The main problem addressed in this paper is the compatibility checking of components in terms of functional and extra-functional properties, and its insufficient tool support. This paper summarizes a mechanism for component compatibility checks and introduces a tool whose aim is to fill this gap, mainly with respect to the phase of testing the assembly of components. The introduced mechanism and tool allow component bindings to be checked before deployment into the target environment. The tool displays a component graph and details of components, and highlights incompatibility problems. Hence, the tool validates the presented mechanism and provides useful support for developers when deciding which component to use.
Download

Paper Nr: 390
Title:

Ontological Configurator: a Novel Approach

Authors:

Fabio Clarizia, Francesco Colace and Massimo De Santo

Abstract: The ability to create customized product configurations that satisfy a user’s needs is one of the major aims of companies. In particular, they seek to design and implement web-based services that allow users to create and customize the desired product in a simple and intuitive way. One research area that has lacked progress is the definition of a common vocabulary that enables consumer-to-manufacturer and manufacturer-to-manufacturer communication. Enabling this communication opens possibilities such as the ability to express customer requirements correctly and to exchange knowledge between manufacturers. A popular approach to expressing knowledge uses ontological formalisms. Using an ontology, a vocabulary can be defined that allows interested parties to specify and share common knowledge and serves to define a framework for the representation of that knowledge. With this aim, this paper presents an ontology-based configurator system that finds the configuration that best satisfies the user’s needs, starting from the desired requirements, the available components, the context information and previous similar configurations. The process by which the system finds the candidate configuration follows an approach known as “Slow Intelligence”. This paper presents the proposed approach in detail and presents its first application.
Download

Paper Nr: 403
Title:

HEIDEGGER AND PEIRCE: Learning with the giants - Critical insights for IS design

Authors:

Ângela L. Nobre

Abstract: Martin Heidegger’s ontology and Charles Sanders Peirce’s semiotics offer a vastly unexplored potential in terms of IS design and development. Though several authors have explored these giants’ works, such contributions have seldom been disseminated and applied within concrete IS and organisations. There is an urgent need to further develop the insights of these scholars, in particular within the current context of post-industrial society. The links between formal and informal processes, between tacit and explicit knowledge, and between diachronic and synchronic analysis are critical for understanding today’s competitiveness. Heidegger’s and Peirce’s works are crucial for a better grasp and optimisation of current complexity at the organisational level.
Download

Paper Nr: 407
Title:

CATAPHORIC LEXICALISATION

Authors:

Robert Foster

Abstract: A traditional lexicalization analysis of a word looks backwards in time, describing each change in the word’s form and usage patterns from actuation (creation) to the present day. I suggest that this traditional view of lexicalization can be labelled Anaphoric Lexicalization to reflect its focus upon what has passed. A corresponding forward-looking process can then be envisaged called Cataphoric Lexicalization. Applying Cataphoric Lexicalization to an existing phrase from a sub-language generates a series of possible lexemes that might represent the target phrase in the future. The method is illustrated using a domain specific example. The conclusion suggests that rigorous application of Cataphoric Lexicalization by a community would in time result in a naturally evolved controlled lexicon.
Download

Paper Nr: 420
Title:

Complex Networks and Complex Systems: A Nature-based Framework for Sustainable Product Design

Authors:

Joe Bradley and Roberto Aldunate

Abstract: Nature-based systems are efficiently designed and are able to respond to many system requirements, such as scalability, adaptability, self-organization, resilience, robustness, durability, reliability, self-monitoring, self-repair, and many others. Using nature’s examples as a guidepost, there is a unique platform for developing more environmentally sustainable products and systems. This position paper makes a case for an interdisciplinary approach to sustainable design and development. It suggests that design should not only mimic natural behaviours but should also benefit from natural phenomena (e.g., wind turbines). The paper proposes a conceptual modeling system framework whereby physical products and systems are designed and modeled with the added benefit of knowing how similar systems work in nature. Developing such a system is nontrivial and requires an interdisciplinary approach. Realizing this system will require merging analytical and computational models of natural systems and human-made systems into a single information system. In this position paper, we discuss the framework at a bird's-eye view.
Download

Area 4 - Software Agents and Internet Computing

Full Papers
Paper Nr: 19
Title:

ASPECT-MONITOR: An Aspect-based Approach to WS-contract Monitoring

Authors:

Marcelo Fantinato, Alessandro Garcia, Mario Freitas da Silva, Maria B. Toledo and Itana Gimenes

Abstract: Contract monitoring is carried out to ensure the Quality of Services (QoS) attributes and levels specified in an electronic contract throughout a business process enactment. This paper proposes an approach to improve QoS monitoring based on the aspect-oriented paradigm. Monitoring concerns are encapsulated into aspects to be executed when specific process points are reached. Differently from other approaches, the proposed solution requires no instrumentation, uses Web services standards, and provides an integrated infrastructure for dealing with contract establishment and monitoring. Moreover, a Business Process Management Execution Environment is designed to automatically support the interaction between customer, provider and monitor organizations.
Download

Paper Nr: 86
Title:

LOCATING AND EXTRACTING PRODUCT SPECIFICATIONS FROM PRODUCER WEBSITES

Authors:

Ludwig Hähne, Maximilian Walther, Daniel Schuster and Alexander Schill

Abstract: Gathering product specifications from the Web is labor-intensive and still requires much manual work to retrieve and integrate the information in enterprise information systems or online shops. This work aims at significantly easing this task by introducing algorithms for automatically retrieving and extracting product information from producers’ websites while only being supplied with the product’s and the producer’s name. Compared to previous work in the field, it is the first approach to automate the whole process of locating the product page and extracting the specifications while supporting different page templates per producer. An evaluation within a federated consumer information system proves the suitability of the developed algorithms. They may easily be applied to comparable product information systems as well to minimize the effort of finding up-to-date product specifications.
Download

Paper Nr: 113
Title:

QUERYABLE SEPA MESSAGE COMPRESSION BY XML SCHEMA SUBTRACTION

Authors:

Stefan Böttcher, Rita Hartel and Christian Messinger

Abstract: In order to standardize the electronic payments within and between the member states of the European Union, SEPA (Single Euro Payments Area) – an XML based standard format – was introduced. As the financial institutes have to store and process huge amounts of SEPA data each day, the verbose structure of XML leads to a bottleneck. In this paper, we propose a compressed format for SEPA data that removes from a SEPA document the data that is already defined by the given SEPA schema. The compressed format allows all operations that have to be performed on SEPA data to be executed on the compressed data directly, i.e., without prior decompression. Even more, the queries being used in our evaluation can be processed on compressed SEPA data with a speed that is comparable to ADSL2+, the fastest ADSL standard. In addition, our tests show that the compressed format reduces the data size to 11% of the original SEPA messages on average, i.e., it compresses SEPA data three times more compactly than other compressors such as gzip, bzip2 or XMill – although these compressors do not allow direct query processing of the compressed data.
Download

Paper Nr: 255
Title:

REPUTATION-BASED SELECTION OF WEB INFORMATION SOURCES

Authors:

Cinzia Cappiello, Donato Barbagallo, Chiara Francalanci and Maristella Matera

Abstract: The paper compares Google’s ranking with the ranking obtained by means of a multi-dimensional source reputation index. The data quality literature defines reputation as a dimension of information quality that measures the trustworthiness and importance of an information source. Reputation is recognized as a multi-dimensional quality attribute. The variables that affect the overall reputation of an information source are related to the institutional clout of the source, to the relevance of the source in a given context, and to the general quality of the source’s information content. We have defined a set of variables measuring the reputation of Web information sources along these dimensions. These variables have been empirically assessed for the top 20 sources identified by Google as a response to 100 queries in the tourism domain. Then, we have compared Google’s ranking and the ranking obtained along each reputation variable for all queries. Results show that the assessment of reputation represents a tangible aid to the selection of information sources.
Download

Paper Nr: 312
Title:

Towards Automated Simulation of Multi Agent Based Systems

Authors:

Ante Vilenica and Winfried Lamersdorf

Abstract: The simulation of systems offers a viable approach to finding the optimal configuration of a system with respect to time, costs or any other utility function. In order to speed up the development process and to relieve developers from the cumbersome work related to the execution of simulation runs, i.e. doing the simulation management manually, it is desirable to have a framework providing tools that perform the simulation management automatically. This work addresses this issue and presents an approach that reduces the effort of managing simulations, i.e. it eases and automates their execution, observation, optimization and evaluation. The approach consists of a declarative simulation description language and a framework that is capable of automatically managing simulations. Thereby, the approach reduces the costs, in terms of both time and money, of performing simulations. Furthermore, this work targets a special subset of simulations, multi-agent based simulation, which has attracted much attention in many areas of science as well as in commercial applications over the last decade. The applicability of the approach is demonstrated by a case study called "mining ore resources" that has been conducted.
Download

Paper Nr: 398
Title:

AN APPROACH FOR WEB SERVICE DISCOVERY BASED ON COLLABORATIVE STRUCTURED TAGGING

Authors:

Uddam Chukmol, Youssef Amghar and Aïcha-Nabila Benharkat

Abstract: This research work presents a folksonomic annotation used in a collaborative tagging system to enhance the Web service discovery process. More expressive than traditional tags, this structural tagging method is able to reflect the functional capability of Web services, and it is easier to use and much more accessible to users than ontology- or logic-based annotation formalisms. We describe a Web service retrieval system exploiting the aforementioned structural tagging. System user profiling is also addressed in order to assign reputation and compute tag recommendations. We present some interesting usage scenarios of this system and a strategy to evaluate its performance.
Download

Short Papers
Paper Nr: 20
Title:

THE IMPACT OF COMPETENCES ASSESSMENT SYSTEMS TO FIRM PERFORMANCE: A STUDY ON PROJECT MANAGEMENT E-ASSESSMENT

Authors:

Maria Dascalu, Camelia Delcea, Dragos Palaghita and Bogdan Vintila

Abstract: This paper studies the relationship between the use of e-assessment in the field of project management and firms’ performance. For this purpose, a case study was designed to explore whether the extensive use of an e-assessment application influences the financial and non-financial indicators of firms’ performance. The participants in the study were users of the e-assessment application: employees working in Romanian firms in the IT, education and consulting business sectors. The main finding of the study was a positive correlation between the considered performance indicators and companies’ growth. The study is presented in the framework of learning at work and highlights the importance of using information systems in enterprise environments to develop professional competences and, thus, to achieve business excellence.
Download

Paper Nr: 21
Title:

THE BEAUTY OF SIMPLICITY: UBIQUITOUS MICROBLOGGING IN THE ENTERPRISE

Authors:

Martin Böhringer and Peter Gluchowski

Abstract: Microblogging has become a major trend on the World Wide Web. Given the history of blogs, wikis and social networking services, its adoption in commercial enterprises seems to be the next logical step. However, enterprise microblogging offers more than a copy of the public web’s functionality: in enterprise scenarios the microblogging mechanism can support many more use cases than its public counterparts, e.g., ‘tweeting’ processes, machines and software. We systematically develop a scenario for ubiquitous microblogging, which means a microblogging space including both human and non-human information sources in an organisation. After presenting a conceptual description we discuss examples of the approach. Based on a comprehensive study of the existing literature we finally present a detailed research agenda towards ubiquitous microblogging for enterprise information management.
Download

Paper Nr: 64
Title:

ADAPTING MULTIPLE-CHOICE ITEM-WRITING GUIDELINES TO AN INDUSTRIAL CONTEXT

Authors:

Robert Foster

Abstract: This paper proposes a guideline for writing MCQ items for our domain which promotes the use of Multiple Alternative Choice (MAC) items. The guideline is derived from one of the guidelines in the taxonomy of item-writing guidelines reviewed by Haladyna et al. (2002). The new guideline is tested by delivering two sets of MCQ test items to a representative sample of candidates from the domain: one set of items complies with the proposed guideline and the other does not. The relative effectiveness of the items in the experiment is evaluated using established methods of item response analysis. The experiment shows that the new guideline is more applicable to the featured domain than the original guideline.
Download

Paper Nr: 133
Title:

IMPROVING MOODLE WITH WIRIS AND M-QIT

Authors:

Angel Mora Bonilla, Enrique Mérida Casermeiro and Domingo López

Abstract: Moodle is one of the most widely used LMSs. Moodle enables ‘collaborative learning’, in which students and teachers collaborate on their daily work. However, problems with Moodle arise in our scientific context. Although Moodle can preview LaTeX code, not all the possibilities of LaTeX are available, and while LaTeX works well for scientific teachers it is hard for students; as a result, mathematical formulas are usually replaced by non-standard versions written in ASCII. Another problem with Moodle is the difficulty of reusing quizzes. We present two tools that improve Moodle in three respects: representation of mathematical formulas, mathematical computation, and improved learning units for students. WIRIS is a powerful editor that allows interacting with mathematical formulas in an easy way and developing new learning units with mathematical computation via the web; M-QIT is a new tool able to manage and reuse quizzes and questions available in previous Moodle courses.
Download

Paper Nr: 158
Title:

USING FUZZY SET APPROACH IN MULTI-ATTRIBUTE AUTOMATED AUCTIONS

Authors:

Madhu Goyal and Saroj Kaushik

Abstract: This paper presents a novel fuzzy attributes and competition based bidding strategy (FAC-Bid), in which the final best bid is calculated on the basis of the assessment of multiple attributes of the goods and the competition for the goods in the market. The assessment of attributes adopts the fuzzy set technique to handle the uncertainty of the bidding process. The bidding strategy also determines competition in the market (based on two factors: the number of participating bidders and the total time elapsed in an auction) using Mamdani’s Direct Method. The final price of the best bid is then determined from the assessed attributes and the market competition using fuzzy reasoning.
Download

Paper Nr: 164
Title:

Abstraction from Collaboration between Agents Using Asynchronous Message-Passing

Authors:

Bent Bruun Kristensen

Abstract: Collaboration between agents using asynchronous message-passing is typically described in a centric form, distributed among the agents. An alternative associative form, also based on message-passing, is shared between the agents: this abstraction from collaboration is a descriptive unit that makes the description of agent collaboration simple and natural.
Download

Paper Nr: 182
Title:

SERVICE ACQUISITION AND VALIDATION IN A DISTRIBUTED SERVICE DISCOVERY SYSTEM CONSISTING OF DOMAIN-SPECIFIC SUB-SYSTEMS

Authors:

Deniz Canturk and Pinar Senkul

Abstract: With the increase in both the number and size of available web services, web service discovery becomes a challenging activity. As new web services emerge in different service registries, and with the tremendous increase in the number of web services not registered in any business registry, finding the web services appropriate for a given need raises problems of performance, efficiency, end-to-end security and quality of the discovered services. In this paper, we introduce a distributed web service discovery system that consists of domain-specific sub-systems. We establish ontology-based, domain-specific web service discovery sub-systems in order to discover web services on private sites and in business registries, and in order to improve service discovery in terms of efficiency and effectiveness. An important aspect of effectiveness is checking whether a discovered service is still active, since many services in the registries are no longer active. Distributing discovery to domain-specific sub-systems makes it possible to reduce the service discovery duration and to provide almost up-to-date service status. In this work, we describe the service acquisition and validation steps of the system in more detail and present experimental results on real services acquired from the web.
Download

Paper Nr: 249
Title:

B2C AND C2C E-MARKETPLACES: A MULTI-LAYER/MULTI-AGENT ARCHITECTURE TO SUPPORT THEM

Authors:

José J. Castro-Schez, Carlos González-Morcillo, Raul Miguel, David Vallejo and Vanesa Herrera

Abstract: Due to the growth of the Internet and users’ increasing tendency to purchase products online, e-commerce has become a very important new business opportunity. In this paper we present a multi-agent, multi-layer architecture to support general-purpose e-marketplaces. The presented system is composed of software agents that act on the users’ behalf, reducing their search time and facilitating product acquisition. These agents are grouped in different layers supporting B2C and C2C e-marketplaces. The interaction between the different agents is explained throughout the paper, with emphasis on the agents used to search for, acquire and recommend products. We have also applied fuzzy logic in these agents because we believe it is very useful for facilitating their use and reducing search and acquisition time.
Download

Paper Nr: 289
Title:

Mobile e-learning - support services case study

Authors:

Catarina Maximiano and Vítor Basto-Fernandes

Abstract: Mobile devices and wireless communications are now present in the daily tasks of our lives. m-Learning extends the e-Learning concept through the use of mobile computation and communication technologies. Mobile computing embodies the “anytime, anywhere access” paradigm, offering resources for distance education via mobile devices. This paradigm allows information to be made available to users with greater flexibility and diversity, supporting learning in non-conventional places and at non-conventional times. The need for lifelong learning and for flexible education profiles requires the support and development of new approaches and tools for learning in the educational context. This paper presents a distance learning case study at the Polytechnic Institute of Leiria. The main objective is the use of mobile devices as support tools for accessing course information and content resources available in Learning Management Systems (in the presented case study, Moodle).
Download

Paper Nr: 324
Title:

APPLYING FIPA STANDARDS ONTOLOGICAL SUPPORT TO INTENTIONAL-MAS-ORIENTED UBIQUITOUS SYSTEM

Authors:

Milene Serrano and Carlos J. Pereira de Lucena