ICEIS 2009 Abstracts


Area 1 - Databases and Information Systems Integration

Full Papers
Paper Nr: 40
Title:

MIDAS: A Middleware for Information Systems with QoS Concerns

Authors:

Luis Fernando Orleans and Geraldo Zimbrão

Abstract: One of the most difficult tasks in the design of information systems is controlling the behaviour of the back-end storage engine, usually a relational database. As the load on the database increases, issued transactions take longer to execute, mainly because of the high number of locks required to provide isolation and concurrency. In this paper we present MIDAS, a middleware designed to manage the behaviour of database servers, focusing primarily on guaranteeing transaction execution within a specified amount of time (deadline). MIDAS was developed for Java applications that connect to storage engines through JDBC. It provides a transparent QoS layer and can be adopted with very few code modifications. All transactions issued by the application are captured and forced to pass through an Admission Control (AC) mechanism. To meet such QoS constraints, we propose a novel AC strategy, called 2-Phase Admission Control (2PAC), which minimizes the number of transactions that exceed the established maximum time by accepting only those transactions that are not expected to miss their deadlines. We also implemented an enhancement over 2PAC, called diffserv, which gives priority to small transactions and can be adopted when they are infrequent.
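The deadline-aware admission idea can be sketched in a few lines: admit a transaction only if its predicted execution time, on top of the predicted backlog of already-admitted work, still fits its deadline. This is a minimal illustration of the general principle, not the paper's actual 2PAC strategy; the class name, the moving-average predictor, and the single-queue backlog model are all assumptions introduced here.

```python
from collections import deque

class DeadlineAdmissionControl:
    """Toy deadline-aware admission controller (illustrative only)."""

    def __init__(self):
        self.history = deque(maxlen=50)   # recent execution times (seconds)
        self.backlog = 0.0                # predicted pending work

    def predict(self):
        # naive predictor: moving average of recent execution times
        return sum(self.history) / len(self.history) if self.history else 0.0

    def admit(self, deadline):
        # accept only if predicted completion time fits the deadline
        expected = self.backlog + self.predict()
        if expected <= deadline:
            self.backlog += self.predict()
            return True
        return False

    def finished(self, elapsed):
        # feed back the observed execution time
        self.history.append(elapsed)
        self.backlog = max(0.0, self.backlog - elapsed)

ac = DeadlineAdmissionControl()
ac.finished(0.2)          # observe a 0.2 s transaction
print(ac.admit(1.0))      # fits a 1.0 s deadline -> True
print(ac.admit(0.1))      # backlog plus prediction exceed 0.1 s -> False
```

A real controller would, as the abstract indicates, intercept transactions at the JDBC layer transparently; the feedback loop above only shows why the predictor and backlog estimate are the two quantities an admission decision needs.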

Paper Nr: 108
Title:

Instance-based OWL Schema Matching

Authors:

Luiz André P. Paes Leme, Marco A. Casanova, Karin K. Breitman and Antonio L. Furtado

Abstract: Schema matching is a fundamental issue in many database applications, such as query mediation and data warehousing. It becomes a challenge when different vocabularies are used to refer to the same real-world concepts. In this context, a convenient approach, sometimes called extensional, instance-based or semantic, is to detect how the same real world objects are represented in different databases and to use the information thus obtained to match the schemas. This paper describes an instance-based schema matching technique for an OWL dialect. The technique is based on similarity functions and is backed up by experimental results with real data downloaded from data sources found on the Web.
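The extensional approach can be illustrated with a deliberately simple similarity function: two properties from different schemas are matched when the sets of instance values observed for them overlap strongly. The Jaccard measure, the threshold, and the greedy pairing below are illustrative assumptions, not the similarity functions of the paper.

```python
def jaccard(a, b):
    """Overlap of the instance values observed for two properties."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def match_properties(schema_a, schema_b, threshold=0.5):
    """Pair each property of schema_a with its most similar property of
    schema_b, judged purely by shared instance values (extensional view)."""
    matches = {}
    for pa, values_a in schema_a.items():
        best, score = None, threshold
        for pb, values_b in schema_b.items():
            s = jaccard(values_a, values_b)
            if s >= score:
                best, score = pb, s
        if best is not None:
            matches[pa] = best
    return matches

# hypothetical catalogues describing the same books under different names
a = {"author": ["Eco", "Calvino", "Levi"], "title": ["Name of the Rose"]}
b = {"writer": ["Eco", "Calvino"], "heading": ["Name of the Rose"]}
print(match_properties(a, b))   # {'author': 'writer', 'title': 'heading'}
```

The point of the sketch is that no vocabulary agreement is needed: "author" and "writer" are matched only because the same real-world objects populate both.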

Paper Nr: 123
Title:

The Integrative Role of IT in Product and Process Innovation: Growth and Productivity Outcomes for Manufacturing

Authors:

Louis Raymond, Anne-Marie Croteau and François Bergeron

Abstract: The assimilation of IT for business process integration plays an integrative role by providing an organization with the ability to exploit innovation opportunities in order to increase its growth and productivity. Based on survey data obtained from 309 Canadian manufacturing SMEs, this study aims at a deeper understanding of the assimilation of IT for business process integration with regard to product and process innovation. The first objective is to identify the effect of this assimilation on growth and productivity. The second objective is to verify whether it varies amongst low-, medium- and high-tech SMEs. Results indicate that the assimilation of IT for business process integration depends upon the type of innovation and varies with the technological intensity of the firms. It has two effects: it increases the growth of manufacturing SMEs by enabling product innovation, but it decreases their productivity by impeding process innovation.

Paper Nr: 141
Title:

Vectorizing Instance-based Integration Processes

Authors:

Matthias Boehm, Dirk Habich, Steffen Preissler, Wolfgang Lehner and Uwe Wloka

Abstract: The inefficiency of integration processes—as an abstraction of workflow-based integration tasks—is often caused by low resource utilization and significant waiting times for external systems. Due to the increasing use of integration processes within IT infrastructures, throughput optimization has a strong influence on the overall performance of such an infrastructure. In the area of computational engineering, low resource utilization is addressed with vectorization techniques. In this paper, we introduce the concept of vectorization in the context of integration processes in order to achieve a higher degree of parallelism, while ensuring transactional behavior and serialized execution. Our evaluation shows that message throughput can be increased significantly.

Paper Nr: 142
Title:

Invisible Deployment of Integration Processes

Authors:

Matthias Boehm, Dirk Habich, Wolfgang Lehner and Uwe Wloka

Abstract: Due to the changing scope of data management towards the management of heterogeneous and distributed systems and applications, integration processes gain in importance. This is particularly true for those processes used as abstractions of workflow-based integration tasks; these are widely applied in practice. In such scenarios, a typical IT infrastructure comprises multiple integration systems with overlapping functionalities. The major problems in this area are high development effort, low portability and inefficiency. Therefore, in this paper, we introduce the vision of invisible deployment that addresses the virtualization of multiple, heterogeneous, physical integration systems into a single logical integration system. This vision comprises several challenging issues in the fields of deployment aspects as well as runtime aspects. Here, we describe those challenges, discuss possible solutions and present a detailed system architecture for that approach. As a result, the development effort can be reduced and the portability as well as the performance can be improved significantly.

Paper Nr: 148
Title:

Customizing Enterprise Software as a Service Applications: Back-end Extension in a Multi-tenancy Environment

Authors:

Jürgen Müller, Jens Krüger, Sebastian Enderlein, Marco Helmich and Alexander Zeier

Abstract: Since the emergence of Salesforce.com, more and more business applications tend to move towards Software as a Service. In order to target Small and Medium-sized Enterprises, platform providers need to lower their operational costs and establish an ecosystem of partners who customize the generic solution and push their products into spot markets. This paper categorizes customization options, identifies cornerstones of a customizable, multi-tenancy-aware infrastructure, proposes a framework that encapsulates multi-tenancy, and introduces a technique for partner back-end customizations with regard to a given real-world scenario.

Paper Nr: 193
Title:

Pattern-based Refactoring of Legacy Software Systems

Authors:

Sascha Hunold, Björn Krellner, Thomas Rauber, Thomas Reichel and Gudula Rünger

Abstract: Rearchitecturing large software systems becomes more and more complex after years of development and a growing code base. Nonetheless, constant adaptation of software in production is needed to cope with new requirements. Thus, refactoring legacy code requires tool support to help developers perform this demanding task. Since the code base of legacy software systems is far beyond the size that developers can handle manually, we present an approach to perform refactoring tasks automatically. In the pattern-based transformation, the abstract syntax tree of a legacy software system is scanned for a particular software pattern. If the pattern is found, it is automatically substituted by a target pattern. In particular, we focus on software refactorings that move methods or groups of methods and dependent member variables. The main objective of this refactoring is to reduce the number of dependencies within a software architecture, which leads to a less coupled architecture. We demonstrate the effectiveness of our approach in a case study.
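The scan-and-substitute mechanics on an abstract syntax tree can be sketched with Python's `ast` module: walk the tree, recognize a source pattern, and replace it with a target pattern. This only illustrates the mechanism; the paper's refactorings (moving methods and dependent member variables) operate on legacy languages and are far richer, and the helper names `old_log`/`new_log` below are hypothetical.

```python
import ast

class RenameCall(ast.NodeTransformer):
    """Minimal pattern-based transformation: find calls to a source
    pattern (the hypothetical legacy helper `old_log`) and substitute
    the target pattern (`new_log`)."""

    def visit_Call(self, node):
        self.generic_visit(node)            # transform nested calls first
        if isinstance(node.func, ast.Name) and node.func.id == "old_log":
            node.func = ast.Name(id="new_log", ctx=ast.Load())
        return node

source = "old_log('starting'); value = compute(); old_log(value)"
tree = RenameCall().visit(ast.parse(source))
print(ast.unparse(tree))
# new_log('starting')
# value = compute()
# new_log(value)
```

Working on the syntax tree rather than on text is what makes such substitutions safe: only genuine call sites are rewritten, never string literals or comments that merely mention the name.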

Paper Nr: 196
Title:

A Natural and Multi-layered Approach to Detect Changes in Tree-based Textual Documents

Authors:

Angelo Di Iorio, Michele Schirinzi, Fabio Vitali and Carlo Marchetti

Abstract: Several efficient and very powerful algorithms exist for detecting changes in tree-based textual documents, such as those encoded in XML. One important aspect is still underestimated in their design and implementation: the quality of the output, in terms of readability, clearness and accuracy for human users. Such a requirement is particularly relevant when diff-ing literary documents, such as books, articles, reviews, acts, and so on. This paper introduces the concept of 'naturalness' in diff-ing tree-based textual documents, and discusses a new extensible set of changes which can and should be detected. A naturalness-based algorithm is presented, as well as its application to diff-ing XML-encoded legislative documents. The algorithm, called JNDiff, proved to detect significantly better matchings (since new operations are recognized) while remaining very efficient.

Paper Nr: 198
Title:

CrimsonHex: A Service Oriented Repository of Specialised Learning Objects

Authors:

José Paulo Leal and Ricardo Queirós

Abstract: The cornerstone of the interoperability of eLearning systems is the standard definition of learning objects. Nevertheless, for some domains this standard is insufficient to fully describe all the assets, especially when they are used as input for other eLearning services. On the other hand, a standard definition of learning objects is not enough to ensure interoperability among eLearning systems; they must also use a standard API to exchange learning objects. This paper presents the design and implementation of a service-oriented repository of learning objects called crimsonHex. This repository is fully compliant with the existing interoperability standards and supports new definitions of learning objects for specialized domains. We illustrate this feature with the definition of programming problems as learning objects and their validation by the repository. The repository is also prepared to store usage data on learning objects to tailor the presentation order and adapt it to learner profiles.

Paper Nr: 213
Title:

A Scalable Parametric-RBAC Architecture for the Propagation of a Multi-modality, Multi-resource Informatics System

Authors:

Remo Mueller, Van Anh Tran and Guo-Qiang Zhang

Abstract: We present a scalable architecture called X-MIMI for the propagation of MIMI (Multi-modality, Multi-resource, Informatics Infrastructure System) to the biomedical research community. MIMI is a web-based system for managing the latest instruments and resources used by clinical and translational investigators. To deploy MIMI broadly, X-MIMI utilizes a parametric Role-Based Access Control model to decentralize the management of user-role assignment, facilitating the deployment and system administration in a flexible manner that minimizes operational overhead. We use Formal Concept Analysis to specify the semantics of roles according to their permissions, resulting in a lattice hierarchy that dictates the cascades of RBAC authority. Additional components of the architecture are based on the Model-View-Controller pattern, implemented in Ruby-on-Rails. The X-MIMI architecture provides a uniform setup interface for centers and facilities, as well as a set of seamlessly integrated scientific and administrative functionalities in a Web 2.0 environment.

Paper Nr: 228
Title:

Minable Data Warehouse

Authors:

David Morgan, Jai W. Kang and James M. Kang

Abstract: Data warehouses have been widely used in various settings such as large corporations and public institutions. These systems contain large and rich datasets that are often used by data mining techniques to discover interesting patterns. However, before data mining techniques can be applied to data warehouses, arduous and convoluted preprocessing must be completed. Thus, we propose a minable data warehouse that integrates the preprocessing stage of a data mining technique within the cleansing and transformation process of a data warehouse. This framework allows data mining techniques to be computed without any additional preprocessing steps. We present our proposed framework using a synthetically generated dataset and a classical data mining technique called Apriori to discover association rules within instant messaging datasets.
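Apriori itself is the classical level-wise frequent-itemset algorithm: count candidate itemsets of size k, keep those meeting minimum support, and build size-(k+1) candidates only from the survivors. The bare-bones version below is illustrative; the toy "instant-messaging sessions" data is an invented stand-in for the paper's synthetic dataset.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise frequent-itemset mining (bare-bones sketch)."""
    transactions = [frozenset(t) for t in transactions]
    level = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    while level:
        # count support of each candidate at this level
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        kept = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(kept)
        # next level: unions of surviving itemsets, one item larger
        survivors = list(kept)
        k = len(survivors[0]) + 1 if survivors else 0
        level = {a | b for a, b in combinations(survivors, 2) if len(a | b) == k}
    return frequent

# toy sessions: which contacts appear together in a conversation
sessions = [{"ann", "bob"}, {"ann", "bob", "cid"}, {"ann", "cid"}, {"bob", "cid"}]
freq = apriori(sessions, min_support=2)
print(freq[frozenset({"ann", "bob"})])   # support of {ann, bob} -> 2
```

The preprocessing the abstract refers to (cleansing, encoding values into transactions) is exactly what the proposed framework folds into the warehouse's ETL stage, so that `transactions` arrives mining-ready.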

Paper Nr: 236
Title:

A Step Forward in Semi-Automatic Metamodel Matching: Algorithms and Tool

Authors:

José de Sousa Jr., Denivaldo Lopes, Daniela Barreiro Claro and Zair Abdelouahab

Abstract: In recent years, the complexity of producing software systems has increased due to the continuous evolution of requirements, the creation of new technologies and integration with legacy systems. As complexity increases, the phases of software development, maintenance and evolution become more difficult to deal with, i.e. they become more error-prone. Recently, Model Driven Architecture (MDA) has made the management of this complexity possible thanks to models and the transformation of Platform-Independent Models (PIMs) into Platform-Specific Models (PSMs). However, the manual creation of transformation definitions is a programming activity that is error-prone precisely because it is manual. In the MDA context, the solution is to provide semi-automatic creation of a mapping specification that can be used to generate transformation definitions in a specific transformation language. In this paper, we present an algorithm to match metamodels, along with enhancements to the MT4MDE and SAMT4MDE tools that implement this matching algorithm.

Paper Nr: 238
Title:

A Study of Indexing Strategies for Hybrid Data Spaces

Authors:

Changqing Chen, Sakti Pramanik, Qiang Zhu and Gang Qian

Abstract: Different indexing techniques have been proposed to index either the continuous data space (CDS) or the non-ordered discrete data space (NDDS). However, modern database applications sometimes require indexing the hybrid data space (HDS), which involves both continuous and non-ordered discrete subspaces. In this paper, the structure and heuristics of the ND-tree, which is a recently-proposed indexing technique for NDDSs, are first extended to the HDS. A novel power value adjustment strategy is then used to make the continuous and discrete dimensions comparable and controllable in the HDS. An estimation model is developed to predict the box query performance of the hybrid indexing. Our experimental results show that the original ND-tree’s heuristics are effective in supporting efficient box queries in the hybrid data space, and could be further improved with our proposed strategies to address the unique characteristics of the HDS.

Paper Nr: 299
Title:

Relaxing XML Preference Queries for Cooperative Retrieval

Authors:

SungRan Cho and Wolf-Tilo Balke

Abstract: Today XML is an essential technology for knowledge management within enterprises and for the dissemination of data over the Web. Therefore the efficient evaluation of XML queries has been thoroughly researched. But given the ever growing amount of information available in different sources, querying also becomes more complex. In contrast to simple exact-match retrieval, approximate matches are far more appropriate over collections of complex XML documents. Only recently has approximate XML query processing been proposed, where structure and value are subject to necessary relaxations. All the possible query relaxations determined by the user's preferences are generated in such a way that predicates are progressively relaxed until a suitable set of best possible results is retrieved. In this paper we present a novel framework for developing preference relaxations of the query, permitting additional flexibility in order to fulfil a user's wishes. We also design IPX, an interface for XML preference query processing, that enables users to express and formulate complex preferences, and provides a first solution for the aspects of XML preference query processing that allow preference querying and returning ranked answers.

Paper Nr: 461
Title:

DeXIN: An Extensible Framework for Distributed XQuery over Heterogeneous Data Sources

Authors:

Muhammad Intizar Ali, Reinhard Pichler, Hong Linh Truong and Schahram Dustdar

Abstract: In the Web environment, rich, diverse sources of heterogeneous and distributed data are ubiquitous. In fact, even the information characterizing a single entity - like, for example, the information related to a Web service - is normally scattered over various data sources using various languages such as XML, RDF, and OWL. Hence, there is a strong need for Web applications to handle queries over heterogeneous, autonomous, and distributed data sources. However, existing techniques do not provide sufficient support for this task. In this paper we present DeXIN, an extensible framework for providing integrated access over heterogeneous, autonomous, and distributed web data sources, which can be utilized for data integration in modern Web applications and Service Oriented Architecture. DeXIN extends the XQuery language by supporting SPARQL queries inside XQuery, thus facilitating the querying of data modeled in XML, RDF, and OWL. DeXIN facilitates data integration in a distributed Web and service-oriented environment by avoiding the transfer of large amounts of data to a central server for centralized data integration and by dispensing with the transformation of huge amounts of data into a common format for integrated access.

Paper Nr: 496
Title:

Dimensional Templates in Data Warehouses: Automating the Multidimensional Design of Data Warehouse Prototypes

Authors:

Rui Oliveira, Fátima Rodrigues, Paulo Martins and João Paulo Moura

Abstract: Prototypes are valuable tools in Data Warehouse (DW) projects. DW prototypes can help end-users get an accurate preview of a future DW system, along with its advantages and constraints. However, DW prototypes have considerably smaller development time windows when compared to complete DW projects. This puts additional pressure on the achievement of the expected prototypes' high quality standards, especially in the highly time-consuming multidimensional design stage, where there is only a thin margin for harmful, unconsidered decisions. Some devised methods for automating DW multidimensional design can be used to accelerate this stage, yet they are more suitable for full DW projects than for prototypes, due to the effort, cost and expertise they require. This paper proposes the semi-automation of DW multidimensional designs using templates. We believe this approach better fits the development speed and cost constraints of DW prototyping, since templates are pre-built, highly adaptable and highly reusable solutions.

Paper Nr: 560
Title:

Multiview Components for User-Aware Web Services

Authors:

Bouchra El Asri, Adil kenzi, Mahmoud Nassar, Abdelaziz Kriouile and Abdelaziz Barrahmoune

Abstract: Component-based software (CBS) aims to meet the need for reusability and productivity. Web service technology leads to systems interoperability. This work addresses the development of CBS using web services technology. Undeniably, a web service may interact with several types of service clients. The central problem is, therefore, how to handle the multidimensional aspect of service clients' needs and requirements. To tackle this problem, we propose the concept of the multiview component as a first-class modelling entity that allows the capture of the various needs of service clients by separating their functional concerns. In this paper, we propose a model-driven approach for the development of user-aware web services on the basis of the multiview component concept. We describe how a multiview-component-based PIM is transformed into two PSMs for the purpose of automatically generating both the user-aware web service description and its implementation. We specify transformations as a collection of transformation rules implemented using ATL as the model transformation language.

Paper Nr: 575
Title:

Knowledge based Query Processing in Large Scale Virtual Organizations

Authors:

Alexandra Pomares, Claudia Roncancio, José Abasolo and María del Pilar Villamil

Abstract: This work concerns query processing to support data sharing in large scale Virtual Organizations (VOs). Characterization of VOs' data sharing contexts reflects the coexistence of factors like source overlapping, uncertain data location, and fuzzy copies in dynamic large scale environments that hinder query processing. Existing results on distributed query evaluation are useful for VOs, but there is no appropriate solution combining the high semantic level and dynamic large scale environments required by VOs. This paper proposes a characterization of VO data sources, called Data Profile, and a query processing strategy (called QPro2e) for large scale VOs with complex data profiles. QPro2e uses an evolving distributed knowledge base describing data source roles w.r.t. shared domain concepts. It allows the identification of logical data source clusters which improve query evaluation in the presence of a very large number of data sources.

Paper Nr: 597
Title:

Applying Recommendation Technology in OLAP Systems

Authors:

Houssem Jerbi, Franck Ravat, Olivier Teste and Gilles Zurfluh

Abstract: OLAP systems, offering a large multidimensional information space, cannot rely solely on standard navigation; they need recommendations to make the analysis process easy and to help users quickly find relevant data for decision-making. In this paper, we propose a recommendation methodology for assisting the user during decision-support analysis. The system helps the user in querying multidimensional data and exposes him to the most interesting patterns, i.e. it provides the user with anticipatory as well as alternative decision-support data. We provide a preference-based approach to apply this methodology.

Paper Nr: 606
Title:

Classification and Prediction of Software Cost through Fuzzy Decision Trees

Authors:

Efi Papatheocharous and Andreas S. Andreou

Abstract: This work addresses the issue of software effort prediction via fuzzy decision trees generated using historical project data samples. Moreover, the effect that various numerical and nominal project characteristics used as predictors have on software development effort is investigated utilizing the classification rules extracted. The approach attempts to classify successfully past project data into homogeneous clusters to provide accurate and reliable cost estimates within each cluster. CHAID and CART algorithms are applied on approximately 1000 project cost data records which were analyzed, pre-processed and used for generating fuzzy decision tree instances, followed by an evaluation method assessing prediction accuracy achieved by the classification rules produced. Even though the experimentation follows a heuristic approach, the trees built were found to fit the data properly, while the predicted effort values approximate well the actual effort.

Paper Nr: 615
Title:

s-OLAP: Approximate OLAP Query Evaluation on Very Large Data Warehouses via Dimensionality Reduction and Probabilistic Synopses

Authors:

Alfredo Cuzzocrea

Abstract: In this paper, we propose s-OLAP, a framework for supporting approximate range query evaluation on data cubes that meaningfully makes use of two innovative perspectives of OLAP research, namely dimensionality reduction and probabilistic synopses. The application scenario of s-OLAP is a networked and heterogeneous very large Data Warehousing environment where applying traditional algorithms for processing OLAP queries is too expensive and inconvenient because of the size of the data cubes and the computational cost needed to access and process multidimensional data. s-OLAP relies on intelligent data representation and processing techniques, among which: (i) the amenity of exploiting the Karhunen-Loeve Transform (KLT) for obtaining dimensionality reduction of data cubes, and (ii) the definition of a probabilistic framework that allows us to provide a rigorous theoretical basis for ensuring probabilistic guarantees on the degree of approximation of the retrieved answers, which is a critical point in the context of approximate query answering techniques in OLAP.

Short Papers
Paper Nr: 112
Title:

EXPERIENCES OF ERP USE IN SMALL ENTERPRISES

Authors:

Paivi Iskanius, Raija Halonen and Matti Mottonen

Abstract: This paper investigates the role of Enterprise Resource Planning (ERP) systems in the context of small and medium-sized enterprises (SMEs). The paper reports on research findings from a case study that has been conducted in 14 SMEs operating in steel manufacturing and woodworking. By dividing the enterprises into three groups (medium-sized, small, and micro enterprises), this study provides a richer understanding of enterprise-size-related issues in the motivations, risks and challenges of ERP adoption.

Paper Nr: 124
Title:

BUSINESS INTELLIGENCE BASED ON A WI-FI REAL TIME POSITIONING ENGINE - A Practical Application in a Major Retail Company

Authors:

Vasco Vinhas, Pedro Abreu and Pedro Mendes

Abstract: Collecting relevant data to perform business intelligence on a real-time basis has always been a crucial objective for managers responsible for economic activities in large spaces. Following this emergent need, the authors propose a platform to perform data gathering and analysis on the location of people and assets by automatic means. The developed system is retail-business oriented and has a fairly distributed architecture. It couples the core elements of a real-time Wi-Fi based location system with a set of developed functional views so as to make explicit the information that one can observe for each tracked entity, the path undertaken through the space, and demographic concentration patterns. Tests were conducted in a real production environment as the outcome of a partnership with a major player in the retail sector, and the obtained results were completely satisfactory, with the managers confirming the relevance of the provided knowledge.

Paper Nr: 126
Title:

DIRECTED ACYCLIC GRAPHS AND DISJOINT CHAINS

Authors:

Yangjun Chen

Abstract: The problem of decomposing a DAG (directed acyclic graph) into a set of disjoint chains has many applications in data engineering. One of them is the compression of transitive closures to support reachability queries on whether a given node v in a directed graph G is reachable from another node u through a path in G. Recently, an interesting algorithm was proposed by Chen et al. (Y. Chen and Y. Chen, 2008) which claims to be able to decompose G into a minimal set of disjoint chains in O(n^2 + bn) time, where n is the number of the nodes of G, and b is G's width, defined to be the size of a largest node subset U of G such that for every pair of nodes u, v ∈ U, there does not exist a path from u to v or from v to u. However, in some cases, it fails to do so. In this paper, we analyze this algorithm and show the problem. More importantly, a new algorithm is discussed, which can always find a minimal set of disjoint chains in the same time complexity as Chen's.
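The classical (slower) way to obtain a minimum set of disjoint chains is Fulkerson's reduction: take the transitive closure of the DAG, run a maximum bipartite matching on it, and the minimum chain count is n minus the matching size (by Dilworth's theorem it equals the width b). The sketch below implements this textbook method, not the O(n^2 + bn) algorithm the paper discusses; it is only meant to make the problem statement concrete.

```python
def min_disjoint_chains(n, edges):
    """Minimum number of disjoint chains covering a DAG on nodes 0..n-1,
    via transitive closure + Kuhn's bipartite matching (textbook method)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)

    # transitive closure: reach[s] = all nodes reachable from s
    reach = [set() for _ in range(n)]
    for s in range(n):
        stack = list(adj[s])
        while stack:
            v = stack.pop()
            if v not in reach[s]:
                reach[s].add(v)
                stack.extend(adj[v])

    match = [-1] * n           # match[v] = chain predecessor chosen for v

    def augment(u, seen):
        # try to give u a chain successor, re-routing earlier choices
        for v in reach[u]:
            if v not in seen:
                seen.add(v)
                if match[v] == -1 or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    matched = sum(1 for u in range(n) if augment(u, set()))
    return n - matched

# a diamond 0->1, 0->2, 1->3, 2->3 has width 2, hence 2 chains
print(min_disjoint_chains(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))   # -> 2
```

Matching on the closure rather than on the raw edges is essential: a chain may skip intermediate nodes, so chain edges must be reachability pairs, not necessarily DAG edges.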

Paper Nr: 152
Title:

AN OBJECT MODEL FOR THE MANAGEMENT OF DIGITAL IMAGES

Authors:

S. Khaddaj and Andreas Hoppe

Abstract: With digital image volumes rising dramatically, there exists an important and urgent need for novel techniques and mechanisms that provide efficient storage and retrieval of the voluminous data generated daily. It is already widely accepted that the use of data abstraction in object-oriented modelling enables real-world objects to be well represented in information systems. In this work we are particularly interested in the use of object-oriented techniques for the management of digital images. Object orientation is well suited for such systems, which require the ability to handle content of multiple types. This paper aims to investigate a conceptual model, based on object versioning techniques, which represents the semantics needed to determine the continuity and pattern of changes of images over time.

Paper Nr: 155
Title:

A MAPREDUCE FRAMEWORK FOR CHANGE PROPAGATION IN GEOGRAPHIC DATABASES

Authors:

Ferdinando Di Martino, Salvatore Sessa, Giuseppe Polese and Mario Vacca

Abstract: Updating a schema is a very important activity which occurs naturally during the life cycle of database systems, due to different causes. A challenging problem arising when a schema evolves is the change propagation problem, i.e. the updating of the database ground instances to make them consistent with the evolved schema. Spatial datasets, a stored representation of geographical areas, are VLDBs, and so the change propagation process, involving an enormous mass of data among geographically distributed nodes, is very expensive and calls for efficient processing. Moreover, the problem of designing languages and tools for spatial dataset change propagation is relevant, given the shortage of tools for schema evolution and, in particular, the limitations of those for spatial datasets. In this paper, we take into account both efficiency and these limitations, and we propose an instance update language, based on the efficient and popular Google MapReduce programming paradigm, which allows a wide class of schema changes to be performed in parallel. A system embodying the language has been implemented.
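The MapReduce dataflow the abstract builds on can be imitated in-process in a few lines: map each record to (key, value) pairs, group by key, reduce each group. The schema change shown (merging per-parcel area rows into one total per region) and all field names are hypothetical examples, not taken from the paper; a real deployment would distribute both phases across nodes.

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Tiny in-process imitation of the MapReduce dataflow."""
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):   # map phase
            groups[key].append(value)
    # shuffle is the grouping above; reduce each key's group
    return {key: reducer(key, values) for key, values in groups.items()}

# hypothetical instance update: collapse per-parcel rows into regional totals
def mapper(row):
    yield row["region"], row["area_m2"]

def reducer(region, areas):
    return {"region": region, "total_area_m2": sum(areas)}

rows = [{"region": "north", "area_m2": 120},
        {"region": "north", "area_m2": 80},
        {"region": "south", "area_m2": 50}]
result = map_reduce(rows, mapper, reducer)
print(result["north"]["total_area_m2"])   # -> 200
```

Expressing an instance update as a mapper/reducer pair is what makes it parallelizable: each node can run the map phase on its local partition of the spatial dataset independently.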

Paper Nr: 161
Title:

ESTABLISHING TRUST NETWORKS BASED ON DATA QUALITY CRITERIA FOR SELECTING DATA SUPPLIERS

Authors:

Ricardo P. del Castillo, Ismael Caballero, Ignacio García-Rodríguez, Macario Polo, Mario Piattini and Eugenio Verbo

Abstract: Nowadays, organizations may have Web portals comprising several websites where a wide variety of information is integrated. These portals are typically composed of a set of Web applications and services that interchange data among them. In this setting, there is no way to find out how the quality of the interchanged data is going to evolve over time. A framework is proposed for establishing trust networks based on the Data Quality (DQ) levels of the interchanged data. We consider two kinds of DQ: inherent DQ and pragmatic DQ. Making a decision about the selection of the most suitable data supplier is based on the estimation of the best expected pragmatic DQ levels. In addition, an example is presented to illustrate the operation of the framework.

Paper Nr: 168
Title:

ALGORITHMS FOR EFFICIENT TOP-K SPATIAL PREFERENCE QUERY EXECUTION IN A HETEROGENEOUS DISTRIBUTED ENVIRONMENT

Authors:

Marcin Gorawski and Kamil Dowlaszewicz

Abstract: Top-k spatial preference queries allow searching for objects on the basis of the character of their neighbourhoods. They find the k objects whose neighbouring objects satisfy the query conditions to the greatest extent. The execution of these queries is complex and lengthy, as it requires numerous accesses to index structures and data. Existing algorithms therefore employ various optimization techniques. These algorithms assume, however, that all data sets required to execute the query are aggregated in one location. In reality, data is often distributed over remote nodes, for example data accumulated by different organizations. This motivated the development of an algorithm capable of efficiently executing the queries in a heterogeneous distributed environment. The paper describes the specifics of operating in such an environment, presents the developed algorithm, describes the mechanisms it employs and discusses the results of the conducted experiments.
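The query semantics can be made concrete with a brute-force sketch: score each object by the best-quality feature within a given radius of it, then return the k highest-scoring objects. The hotel/restaurant example and the max-quality scoring rule are illustrative assumptions; real engines, including the distributed algorithm the paper develops, prune this all-pairs scan via spatial indexes.

```python
import heapq
import math

def topk_spatial_preference(objects, features, radius, k):
    """Top-k objects ranked by the best feature quality in their
    neighbourhood (brute-force sketch of the query semantics)."""
    def score(obj):
        x, y = obj["pos"]
        best = 0.0
        for f in features:
            fx, fy = f["pos"]
            if math.hypot(x - fx, y - fy) <= radius:
                best = max(best, f["quality"])
        return best
    return heapq.nlargest(k, objects, key=score)

# hypothetical data: rank hotels by the best restaurant within 2 units
hotels = [{"name": "A", "pos": (0, 0)}, {"name": "B", "pos": (5, 5)}]
restaurants = [{"pos": (1, 0), "quality": 0.9},
               {"pos": (5, 6), "quality": 0.4}]
top = topk_spatial_preference(hotels, restaurants, 2.0, 1)
print([h["name"] for h in top])   # -> ['A']
```

Even this naive version shows why distribution is hard: the objects and the feature sets that score them may live on different nodes, so a distributed algorithm must decide what to ship and what to prune.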

Paper Nr: 185
Title:

AN INFORMATION SYSTEM FOR THE MANAGEMENT OF CHANGES DURING THE DESIGN OF BUILDING PROJECTS

Authors:

Essam Zaneldin

Abstract: Design is an important stage in a project's life cycle with the greatest impact on the overall performance and cost. For several reasons, changes introduced by design participants are imminent. Despite the importance of coordinating these changes among the different participants during the design stage, current practice exhibits severe information transfer problems. Since corrections to finalized designs or even designs at late stages in the process are extremely costly, it is less costly to spend the effort in managing changes and producing highly coordinated and easily constructible designs. To support this objective, this paper presents an information system with a built-in database for representing design information, including design rationale and history of changes, to support the management of changes during the design of building projects. The components of the system are discussed and possible future extensions to the present study are presented. This research is expected to help engineering and design-build firms to effectively manage design changes and produce better coordinated and constructible designs with less cost and time.

Paper Nr: 189
Title:

EFFICIENT SYSTEM INTEGRATION USING SEMANTIC REQUIREMENTS AND CAPABILITY MODELS - An Approach for Integrating Heterogeneous Business Services

Authors:

Thomas Moser, Richard Mordinyi, Stefan Biffl and Alexander Mikula

Abstract: Business system designers want to integrate heterogeneous legacy systems to provide flexible business services cheaper and faster. Unfortunately, modern integration technologies represent important integration knowledge only implicitly, making solutions harder to understand, verify, and maintain. In this paper we propose a data-driven approach, “Semantically-Enabled Externalization of Knowledge” (SEEK), that explicitly models the semantics of integration requirements and capabilities, and of data transformations between heterogeneous legacy systems. The goal of SEEK is to make the systems integration process more efficient by providing tool support for quality assurance (QA) steps and for the generation of system configurations. Based on use cases from industry partners, we compare the SEEK approach with UML-based modeling. In the evaluation context, SEEK was found to be more effective at making expert knowledge on system requirements and capabilities available for more efficient tool support and reuse.

Paper Nr: 197
Title:

SEMANTIC FRAMEWORK FOR INFORMATION INTEGRATION - Using Service-oriented Analysis and Design

Authors:

Prima Gustiené, Irina Peltomaa and Heli Helaakoski

Abstract: Today’s dynamic markets demand from companies new ways of thinking, the adoption of new technologies and more flexible production. These business drivers can be met effectively and efficiently only if people and enterprise resources, such as information systems, collaborate. The gap between organizational business aspects and information technology makes it difficult for companies to reach their goals. Information systems play an increasingly important role in the realization of business process demands, which calls for close interaction and understanding between organizational and technical components. This is critical for enterprise interoperability, where the semantic integration of information and technology is the prerequisite for successful collaboration. The paper presents a new semantic framework for a better quality of semantic interoperability.

Paper Nr: 207
Title:

INJECTING SEMANTICS INTO EVENT-DRIVEN ARCHITECTURES

Authors:

Jürgen Dunkel, Alberto Fernández, Rubén Ortiz and Sascha Ossowski

Abstract: Event-driven architectures (EDA) have been proposed as a new architectural paradigm for event-based systems that process complex event streams. However, EDA have not yet reached the maturity of well-established software architectures because methodologies, models and standards are still missing. Despite the fact that EDA-based systems are essentially built on events, a general event modelling approach is lacking. In this paper we put forward a semantic approach to event modelling that is expressive enough to cover a broad variety of domains. Our approach is based on semantically rich event models using ontologies that allow the representation of structural properties of event types and constraints between them. We then argue in favour of a declarative approach to complex event processing that draws upon well-established rule languages such as JESS and integrates the structural event model. We illustrate the adequacy of our approach with a prototype for an event-based road traffic management system.
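The declarative event-processing idea can be sketched as condition-action rules over an event stream. The following minimal Python sketch uses hypothetical event types and thresholds, standing in for the ontology-backed JESS rules the paper actually uses:

```python
def make_rule(condition, action):
    """A declarative rule: fire the action when the condition matches."""
    return {"condition": condition, "action": action}

def process(events, rules):
    """Run every event through every rule; collect derived complex events."""
    derived = []
    for event in events:
        for rule in rules:
            if rule["condition"](event):
                derived.append(rule["action"](event))
    return derived

# Hypothetical road-traffic events: sensor readings with average speed.
events = [
    {"type": "SpeedReading", "road": "A1", "avg_speed": 12},
    {"type": "SpeedReading", "road": "A2", "avg_speed": 95},
]

# Rule: a low average speed signals a possible congestion event.
congestion_rule = make_rule(
    condition=lambda e: e["type"] == "SpeedReading" and e["avg_speed"] < 20,
    action=lambda e: {"type": "Congestion", "road": e["road"]},
)

print(process(events, [congestion_rule]))  # [{'type': 'Congestion', 'road': 'A1'}]
```

A semantic event model would additionally type-check the event fields against an ontology before the rules fire.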

Paper Nr: 230
Title:

C3: A METAMODEL FOR ARCHITECTURE DESCRIPTION LANGUAGE BASED ON FIRST-ORDER CONNECTOR TYPES

Authors:

Abdelkrim Amirat and Mourad Oussalah

Abstract: To provide hierarchical descriptions from different software architectural viewpoints, we need more than one abstraction hierarchy, along with connection mechanisms to support the interactions among components. These mechanisms also support the refinement and traceability of architectural elements through the different levels of each hierarchy. Current methods and tools provide poor support for the challenge posed by developing systems using hierarchical descriptions. This paper describes an architecture-centric approach that allows the user to describe the logical architecture view, from which a physical architecture view is generated automatically for all application instances of the logical architecture.

Paper Nr: 243
Title:

QUERY MELTING - A New Paradigm for GIS Multiple Query Optimization

Authors:

Haifa Elsidani Elariss, Souheil Khaddaj and Darrel Greenhill

Abstract: Recently, applications for non-expert mobile users have been developed to query Geographic Information Systems (GIS), particularly Location Based Services, where users ask questions related to their position, whether they are moving (dynamic) or not (static). A new Iconic Visual Query Language (IVQL) has been developed to handle proximity analysis queries that find k-nearest neighbours and objects within a buffer area. Each operator in an IVQL query corresponds to an execution plan to be evaluated by the GIS server. Since commonalities exist between the execution plans, the same operations are executed many times, leading to slow results. Hence the need arises to develop a multi-user dynamic complex query optimizer that handles commonalities and processes the queries faster, especially given the large number of mobile users. We present a new query processor, a generic optimization framework for GIS and a middleware, which employ the new Query Melting paradigm (QM), based on the sharing paradigm and the push-down optimization strategy. QM is implemented through a new Melting-Ruler strategy that works at the low level, melts repetitions in plans to share spatial areas, temporal intervals, objects, intermediate results, maps, user locations, and functions, and then re-orders them to get time-cost-effective results; it is illustrated using a sample tourist GIS system.

Paper Nr: 261
Title:

MODELING WEB DOCUMENTS AS OBJECTS FOR AUTOMATIC WEB CONTENT EXTRACTION - Object-oriented Web Data Model

Authors:

Estella Annoni and C. I. Ezeife

Abstract: Traditionally, mining web page contents involves modeling them to discover the underlying knowledge. Data extraction proposals represent web data in a formal structure, such as database structures specific to application domains. Those models fail to capture the full diversity of web data structures, which can be composed of different types of contents and can also be unstructured. In fact, with these proposals it is not possible to focus on a given type of content, to work on data of different structures, or to mine data from different application domains, as required to efficiently mine a given content type or web documents from different domains. Moreover, since web pages are designed to be understood by users, this paper considers the modeling of web document presentation, expressed through HTML tag attributes, as useful for efficient web content mining. Hence, this paper provides a general framework composed of an object-oriented web data model based on HTML tags and algorithms for web content and web presentation object extraction from any given web document. From the HTML code of a web document, web objects are extracted for mining, regardless of the domain.

Paper Nr: 286
Title:

TOWARD A QUALITY MODEL FOR CBSE - Conceptual Model Proposal

Authors:

María A. Reyes, Maryoly Ortega, María Pérez, Anna Grimán, Luis E. Mendoza and Kenyer Domínguez

Abstract: In this paper, which is part of research in progress, we analyze the conceptual elements behind Component-Based Software Engineering (CBSE) and propose a model that will support its quality evaluation. The proposed conceptual model integrates the product perspective, a view that includes components and Component-Based Software (CBS), as well as the process perspective, a view that represents the component and CBS development life cycle. The model was developed under a systemic approach that allows for assessing and improving the products and processes involved in CBSE. Future work includes proposing metrics to operationalize the model and validating them through a case study. Applying the model will allow studying the behavior of each perspective and the relationships among them.

Paper Nr: 298
Title:

OPTIMIZATION OF SPARQL BY USING CORESPARQL

Authors:

Jinghua Groppe, Sven Groppe and Jan Kolbaum

Abstract: SPARQL is becoming an important query language for RDF data. Query optimization to speed up query processing has been an important research topic for all query languages. In order to optimize SPARQL queries, we suggest a core fragment of the SPARQL language, which we call coreSPARQL. coreSPARQL has the same expressive power as SPARQL, but eliminates redundant language constructs. SPARQL engines and optimization approaches will benefit from using coreSPARQL, because fewer cases need to be considered when processing coreSPARQL queries and the coreSPARQL syntax is machine-friendly. In this paper, we present an approach to automatically transforming SPARQL into coreSPARQL, and we develop a set of rewriting rules to optimize coreSPARQL queries. Our experimental results show that our optimization of SPARQL speeds up RDF querying.

Paper Nr: 311
Title:

FedDW: A TOOL FOR QUERYING FEDERATIONS OF DATA WAREHOUSES - Architecture, Use Case and Implementation

Authors:

Stefan Berger and Michael Schrefl

Abstract: Recently, Federated Data Warehouses – collections of autonomous and heterogeneous Data Marts – have become increasingly attractive as they enable the exchange of business information across organization boundaries. The advantage of federated architectures is that users may access the global, mediated schema with OLAP applications, while the Data Marts need not be changed and retain full autonomy. Although the underlying concepts are mature, tool support for Federated DWs has been poor so far. This paper presents the prototype of the “FedDW” Query Tool, which supports distributed query processing in federations of ROLAP Data Marts. It acts as a middleware component that reformulates user queries according to semantic correspondences between the autonomous Data Marts. We explain FedDW’s architecture, demonstrate a use case and describe our implementation. We regard our proof-of-concept prototype as a first step towards the development of industrial-strength query tools for DW federations.

Paper Nr: 318
Title:

AN USER-CENTRIC AND SEMANTIC-DRIVEN QUERY REWRITING OVER PROTEOMICS XML SOURCES

Authors:

Kunalè Kudagba, Omar El Beqqali and Hassan Badir

Abstract: Querying and sharing Web proteomics data is not an easy task. Given that several data sources can be used to answer the same sub-goals of the global query, many candidate rewritings are possible. The user query is formulated using concepts and properties related to proteomics research (a domain ontology), while semantic mappings describe the contents of the underlying sources. In this paper, we propose a characterization of the query rewriting problem that represents the semantic mappings as an associated hypergraph. Hence, the generation of candidate rewritings can be formulated as the discovery of the minimal transversals of a hypergraph. We exploit and adapt algorithms available in hypergraph theory to find all candidate rewritings for a query answering problem. In future work, relevant criteria could help to determine optimal and qualitative rewritings, according to user needs and source performance.
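The reduction to minimal hypergraph transversals can be illustrated with a brute-force sketch. The source names and mapping hypergraph below are hypothetical, and real algorithms from hypergraph theory avoid this exponential enumeration:

```python
from itertools import chain, combinations

def minimal_transversals(hyperedges):
    """Brute-force enumeration of the minimal transversals (hitting sets)
    of a hypergraph: vertex sets that intersect every hyperedge and have
    no proper subset doing the same."""
    vertices = sorted(set(chain.from_iterable(hyperedges)))
    candidates = [
        frozenset(combo)
        for r in range(1, len(vertices) + 1)
        for combo in combinations(vertices, r)
        if all(set(combo) & edge for edge in hyperedges)
    ]
    return sorted(
        tuple(sorted(c))
        for c in candidates
        if not any(other < c for other in candidates)
    )

# Hypothetical mapping hypergraph: each hyperedge lists the data sources
# able to answer one sub-goal of the global query; a minimal transversal
# is then one candidate rewriting covering every sub-goal.
edges = [{"S1", "S2"}, {"S2", "S3"}]
print(minimal_transversals(edges))  # [('S1', 'S3'), ('S2',)]
```

Here source S2 alone covers both sub-goals, and the pair S1, S3 is the only other rewriting with no redundant source.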

Paper Nr: 323
Title:

PSO-BASED RESOURCE SCHEDULING ALGORITHM FOR PARALLEL QUERY PROCESSING ON GRIDS

Authors:

Arturo Pérez-Cebreros, Gilberto Martínez-Luna and Nareli Cruz-Cortés

Abstract: The accelerated development of Grid computing has positioned it as a promising next-generation computing platform. Grid computing involves resource management, task scheduling, security problems, information management and so on. In the context of database query processing, existing parallelisation techniques cannot operate well in Grid environments because of the way they select machines and allocate queries. This is due to the geographic distribution of resources that are owned by different organizations. The resource owners have different usage or access policies, cost models, varying loads and availability, which makes efficient scheduling algorithm design and implementation a big challenge. In this paper, a heuristic approach based on the particle swarm optimization algorithm is adopted to solve the parallel query scheduling problem in a Grid environment.
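The general shape of such a particle swarm scheduler can be sketched as follows. The cost model, coefficients and two-machine example are hypothetical, not the parameters used in the paper:

```python
import random

def pso_schedule(costs, n_machines, n_particles=20, iters=60, seed=1):
    """Toy particle swarm optimizer assigning queries to machines to
    minimize makespan. Positions are real-valued vectors decoded into
    machine indices; a sketch, not the paper's actual algorithm."""
    rng = random.Random(seed)
    n = len(costs)
    decode = lambda pos: [int(x) % n_machines for x in pos]
    def makespan(pos):
        load = [0.0] * n_machines
        for q, m in enumerate(decode(pos)):
            load[m] += costs[q]
        return max(load)
    # Initialize particles with random positions and zero velocities.
    parts = [[rng.uniform(0, n_machines) for _ in range(n)]
             for _ in range(n_particles)]
    vels = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in parts]
    gbest = min(parts, key=makespan)[:]
    for _ in range(iters):
        for i, p in enumerate(parts):
            for d in range(n):
                r1, r2 = rng.random(), rng.random()
                # Standard velocity update: inertia + cognitive + social terms.
                vels[i][d] = (0.7 * vels[i][d]
                              + 1.4 * r1 * (pbest[i][d] - p[d])
                              + 1.4 * r2 * (gbest[d] - p[d]))
                p[d] += vels[i][d]
            if makespan(p) < makespan(pbest[i]):
                pbest[i] = p[:]
            if makespan(p) < makespan(gbest):
                gbest = p[:]
    return decode(gbest), makespan(gbest)

# Hypothetical per-query costs scheduled onto two machines.
assignment, span = pso_schedule([4, 2, 3, 1], n_machines=2)
print(assignment, span)
```

A Grid-aware version would replace the makespan fitness with a cost model reflecting per-site access policies, loads and availability.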

Paper Nr: 359
Title:

APPLYING INFORMATION RETRIEVAL FOR MARKET BASKET RECOMMENDER SYSTEMS

Authors:

Tapio Pitkaranta

Abstract: Coded data sets form the basis for many well-known applications, from healthcare prospective payment systems to recommender systems in online shopping. Previous studies on coded data sets have introduced methods for the analysis of rather small data sets. This study proposes applying information retrieval methods to enable high-performance analysis of data masses that scale beyond traditional approaches. An essential component in today’s data warehouses, into which coded data sets are collected, is the database management system (DBMS). This study presents experimental results on how information retrieval indexes scale and outperform common database schemas with a leading commercial DBMS engine in the analysis of coded data sets. The results show that flexible analysis of hundreds of millions of coded data sets is possible with regular desktop hardware.

Paper Nr: 372
Title:

SYMBOLIC EXECUTION FOR DYNAMIC, EVOLUTIONARY TEST DATA GENERATION

Authors:

Anastasis A. Sofokleous, Andreas S. Andreou and Antonis Kourras

Abstract: This paper combines the advantages of symbolic execution with search-based testing to automatically produce test data for Java programs. A framework is proposed comprising two systems which collaborate to generate test data. The first system is a program analyser capable of performing dynamic and static program analysis. The program analyser creates the control flow graph of the source code under test and uses a symbolic transformation to simplify the graph and generate paths as independent control flow graphs. The second system is a test data generator that aims to create a set of test cases covering each path. The implementation details of the framework, as well as the relevant experiments carried out on a number of Java programs, are presented. The experimental results demonstrate the efficiency and efficacy of the framework and show that it can outperform related approaches.
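The path-oriented idea (enumerate control-flow paths, then search for inputs covering each one) can be sketched in miniature. The toy CFG and input range below are hypothetical:

```python
def paths(cfg, node, exit_node, prefix=()):
    """Enumerate all control-flow paths from node to exit_node (acyclic CFG)."""
    prefix = prefix + (node,)
    if node == exit_node:
        return [prefix]
    return [p for nxt in cfg[node] for p in paths(cfg, nxt, exit_node, prefix)]

# Hypothetical CFG of: if x > 0 then block A else block B; then exit.
cfg = {"entry": ["A", "B"], "A": ["exit"], "B": ["exit"], "exit": []}

def trace(x):
    """Concrete execution: record which path the input exercises."""
    return ("entry", "A" if x > 0 else "B", "exit")

all_paths = paths(cfg, "entry", "exit")
# Search-based test data generation: find an input covering each path.
tests = {p: next(x for x in range(-5, 6) if trace(x) == p) for p in all_paths}
print(tests)  # {('entry', 'A', 'exit'): 1, ('entry', 'B', 'exit'): -5}
```

The framework in the paper replaces the naive input search with evolutionary search and uses symbolic simplification to keep the set of paths manageable.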

Paper Nr: 375
Title:

A BIT-SELECTOR TECHNIQUE FOR PERFORMANCE OPTIMIZATION OF DECISION-SUPPORT QUERIES

Authors:

Ricardo Jorge Santos and Jorge Bernardino

Abstract: Performance optimization of decision support queries has always been a major issue in data warehousing. A wide range of techniques has been used in research to overcome this problem. Bit-based techniques such as bitmap indexes and bitmap join indexes are generally accepted as standard practice for optimizing data warehouses. These techniques are very promising due to their relatively low overhead and fast bitwise operations. In this paper, we propose a new technique which performs optimized row selection for decision support queries by introducing a bit-based attribute into the fact table. This attribute’s value for each row is set according to the row’s relevance for processing each decision support query, using bitwise operations. Simply inserting a new column into the fact table’s structure and using bitwise operations to perform row selection makes this a simple and practical technique, easy to implement in any Database Management System. The experimental results, using the TPC-H benchmark, demonstrate that it is an efficient optimization method which significantly improves query performance.
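The core of the bit-selector idea can be sketched outside a DBMS: reserve one bit of an extra integer column per decision-support query, precompute it, and reduce query-time row selection to a bitwise AND. The schema and query below are hypothetical:

```python
# A toy fact table with the extra bit-selector column initialized to 0.
fact_table = [
    {"order_id": 1, "revenue": 120.0, "selector": 0},
    {"order_id": 2, "revenue": 40.0,  "selector": 0},
    {"order_id": 3, "revenue": 300.0, "selector": 0},
]

Q_HIGH_REVENUE = 0   # bit position reserved for a "high revenue" query

def tag_rows(table, bit, predicate):
    """Precompute relevance: set the query's bit on every matching row."""
    for row in table:
        if predicate(row):
            row["selector"] |= 1 << bit

def select(table, bit):
    """Query time: row selection is a single bitwise AND per row."""
    return [row for row in table if row["selector"] & (1 << bit)]

tag_rows(fact_table, Q_HIGH_REVENUE, lambda r: r["revenue"] >= 100)
print([r["order_id"] for r in select(fact_table, Q_HIGH_REVENUE)])  # [1, 3]
```

In SQL the same selection would be a `WHERE selector & 1 = 1` style predicate, which is why the technique ports to any DBMS that supports bitwise operators.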

Paper Nr: 396
Title:

A DOMAIN SPECIFIC LANGUAGE FOR THE I* FRAMEWORK

Authors:

Carlos Nunes, João Araújo, Vasco Amaral and Carla Silva

Abstract: The i* framework proposes a goal-oriented analysis method for requirements engineering. It is a systematic approach to discovering and structuring requirements at the organizational level, where functional and non-functional requirements and their relations are specified. A Domain Specific Language (DSL) is intended to specify and model the concepts of some domain, and has several advantages over general-purpose languages, such as allowing a solution to be expressed in the desired language and at the desired abstraction level. In order to create such a DSL, it is normally necessary to start by specifying its syntax by means of a metamodel, which is given as input to the language workbenches that generate the corresponding editors. With a proper editor for the language, we can specify models with the proposed notation. This paper presents a DSL for the i* framework that handles the complexity and scalability of its concrete models by introducing into the i* metamodel mechanisms that help manage model scalability.

Paper Nr: 460
Title:

EXTENDING THE UML-GEOFRAME DATA MODEL FOR CONCEPTUAL MODELING OF NETWORK APPLICATIONS

Authors:

Sergio Stempliuc, Jugurta Lisboa Filho, Marcus V. A. Andrade and Karla A. V. Borges

Abstract: This paper presents an extension of the UML-GeoFrame data model that includes a set of new constructors to allow the definition of conceptual schemas for spatial database applications whose elements’ relationships form a network. It also discusses how the GeoFrame conceptual framework changes with the inclusion of new metaclasses and the corresponding stereotypes related to network elements. The proposed extension is evaluated using a class diagram for a water distribution company.

Paper Nr: 478
Title:

INTEGRATION METHOD AMONG BSC, CMMI AND SIX SIGMA USING GQM TO SUPPORT MEASUREMENT DEFINITION (MIBCIS)

Authors:

Leonardo Romeu, Jorge Audy and Andressa Covatti

Abstract: The software quality area has produced various studies and surveys on different fronts, concerning both products and processes. There are many initiatives in the area of software process improvement, which may often conflict within an organization. Among the models and methodologies existing in the market, the CMMI Model and the Six Sigma Methodology stand out as complementary. While CMMI focuses on the organization and on process management, and Six Sigma focuses on the client and on financial results, both highlight the importance of the data produced for decision making. This study presents a method for the integrated implementation of the CMMI Model and the Six Sigma Methodology in process improvement programs, supported by measurement and assessment techniques such as the Balanced Scorecard (BSC) and the Goal-Question-Metric (GQM).

Paper Nr: 484
Title:

GISEIEL: AN OPEN SOURCE GIS FOR TERRITORIAL MANAGEMENT

Authors:

Pedro A. González, Miguel Lorenzo, Miguel R. Luaces, David Trillo and José Ignacio Lamas Fonte

Abstract: The provincial government of A Coruña, in Spain, has been working over the last years on the construction of a geographic information system for the management of its territory. This work has resulted in three software products: WebEIEL, gisEIEL and the ideAC node. WebEIEL is the web application that publishes the information on the Internet. gisEIEL is the desktop application used by the provincial government and the municipalities to create, query, visualize, analyze and update the information in the system. Finally, the ideAC node is a spatial data infrastructure that uses international standards to publish the information as part of the Spanish spatial data infrastructure. In this paper, we describe the functionality and architecture of the system, and we present the problems that we had to face during its development and the solutions that we applied.

Paper Nr: 492
Title:

ASSESSING WORKFLOW MANAGEMENT SYSTEMS - A Quantitative Analysis of a Workflow Evaluation Model

Authors:

Stephan Poelmans and Hajo A. Reijers

Abstract: Despite the enormous interest in workflow management systems and their widespread adoption by industry, few research studies are available that empirically assess the effectiveness and acceptance of this technology. Our work aims precisely at providing such insights, and this paper presents some of our preliminary quantitative findings. Using a theory-based workflow success model, we have studied the impact of operational workflow technologies on end-users in terms of perceived usefulness, end-user satisfaction and perceived organisational benefits. A survey instrument was used to gather a sample of 246 end-users from two different organizations. Our findings show that the considered workflow applications are generally accepted and positively evaluated. Using partial least squares analysis, the success model was well supported, making it a useful instrument for evaluating future workflow projects.

Paper Nr: 504
Title:

EFFICIENT COMMUNITY MANAGEMENT IN AN INDUSTRIAL DESIGN ENGINEERING WIKI - Distributed Leadership

Authors:

Regine W. Vroom, Adrie Kooijman and Raymond Jelierse

Abstract: Industrial design engineers draw on a wide variety of research fields when making decisions that will eventually have a significant impact on their designs. Obviously, designers cannot master every field, so they are often looking for a simple set of rules of thumb on a particular subject. For this reason a wiki has been set up: www.wikid.eu. Whilst Wikipedia already offers a lot of this information, there is a distinct difference between WikID and Wikipedia: Wikipedia aims to be an encyclopaedia, and therefore tries to be as complete as possible, whereas WikID aims to be a design tool. It offers information in a compact manner, tailored to its user group of industrial designers. The main subjects of this paper are the research on how to create an efficient structure for the WikID community and the creation of a tool for managing that community. With the new functionality for managing group memberships and viewing information on users, it will be easier to maintain the community. This will also help in creating a better community which is more inviting to participate in, provided that the assumptions made in this area hold true.

Paper Nr: 509
Title:

IS THE APPLICATION OF ASPECT-ORIENTED PROGRAMMING CONSTRUCTS BENEFICIAL? - First Experimental Results

Authors:

Sebastian Kleinschmager and Stefan Hanenberg

Abstract: Aspect-oriented software development is an approach that addresses concerns which traditional software engineering constructs fail to modularize: the so-called crosscutting concerns. However, although aspect-orientation claims to permit a better modularization of crosscutting concerns, it is still not clear whether the application of aspect-oriented constructs has a measurable, positive impact on the construction of software artefacts. This paper addresses this issue with an empirical study that compares the specification of crosscutting concerns using traditional composition techniques and aspect-oriented composition techniques, based on the object-oriented programming language Java and the aspect-oriented programming language AspectJ.

Paper Nr: 528
Title:

INTRODUCING REAL-TIME BUSINESS CASE DATABASE - An Approach to Improve System Maintenance of Complex Application Landscapes

Authors:

Oliver Daute

Abstract: While the maintenance of single systems is under control nowadays, new challenges arise from the use of linked-up software applications to implement business scenarios. Numerous business processes exchange data across complex application landscapes, using various applications to compute their data. The technology underneath has to provide a stable environment while maintaining diverse software, database and operating system components. The challenge is to keep the application environment under control at any given time. The goal is to avoid incidents affecting business processes and to sustain the application landscape through smaller and larger changes. For the maintenance of complex environments, information about process run-states is indispensable, for example when parts of a system environment must be restored. This paper introduces the Real-Time Business Case Database (RT-BCDB) to control business processes and improve maintenance activities in complex application landscapes. It is a concept for gaining more transparency and visibility of business process activities. RT-BCDB continuously stores information about business cases and their run-states. Service frameworks such as IT Service Management (ITIL) can benefit from RT-BCDB as well.

Paper Nr: 535
Title:

AN EXTENSION OF ONTOLOGY BASED DATABASES TO HANDLE PREFERENCES

Authors:

Dilek Tapucu, Yamine Ait-Ameur, Stéphane Jean and Murat Osman Ünalir

Abstract: Ontologies have been defined to make the semantics of data explicit. With the emergence of the Semantic Web, the amount of ontological data (or instances) available has increased. To manage such data, Ontology Based DataBases (OBDBs), which store ontologies and their instance data in the same repository, have been proposed. These databases are associated with exploitation languages supporting description, querying, etc., on both ontologies and data. However, queries usually return a large amount of data that must be sorted in order to find the relevant items. Moreover, few approaches to date consider user preferences during querying. Yet this problem is fundamental for many applications, especially in the e-commerce domain. In this paper, we first propose an extension of an existing OBDB, called OntoDB, by extending its ontology model to support the semantic description of preferences. Secondly, an extension of OntoQL, an ontology-based query language defined on OntoDB, for querying ontological data with preferences is presented. Finally, an implementation of the proposed extensions is described.

Paper Nr: 544
Title:

A USER-DRIVEN AND A SEMANTIC-BASED ONTOLOGY MAPPING EVOLUTION APPROACH

Authors:

Hélio Martins and Nuno Silva

Abstract: Systems or software agents do not always agree on the information being shared, justifying the use of distinct ontologies for the same domain. For achieving interoperability, declarative mappings are used as a basis for exchanging information between systems. However, in dynamic environments like the Web and the Semantic Web, ontologies constantly evolve, potentially leading to invalid ontology mappings. This paper presents two approaches for managing ontology mapping evolution: a user-centric approach, in which the user defines the mapping evolution strategies to be applied automatically by the system, and a semantic-based approach, in which the ontology’s evolution logs are exploited to capture the semantics of changes, which are then adapted to and applied in the ontology mapping evolution process.

Paper Nr: 562
Title:

A SERVICE-BASED APPROACH FOR DATA INTEGRATION BASED ON BUSINESS PROCESS MODELS

Authors:

Hesley Py, Lucia Castro, Fernanda Araujo Baião and Asterio Tanaka

Abstract: Business-IT alignment is gaining importance in enterprises and is already considered essential for efficiently achieving enterprise goals. This has led organizations to follow Enterprise Architecture approaches, with Information Architecture as one of their pillars. Information architecture aims at providing an integrated and holistic view of the business information, and this requires applying a data integration approach. However, despite extensive research on data integration, the problem is far from solved. Highly heterogeneous computing environments present new challenges, such as distinct DBMSs, distinct data models, distinct schemas and distinct semantics, all in the same scenario. On the other hand, new developments in the enterprise environment, such as the emergence of the BPM and SOA approaches, contribute to a new solution to the problem. This paper presents a service-based approach for data integration in which the services are derived from the organization’s business process models. The proposed approach comprises a framework of different types of services (data services, concept services), a method for identifying data integration services from process models, and a metaschema needed for the automation and customization of the proposed approach in a specific organization. We focus on handling heterogeneities with regard to different DBMSs and differences among data models, schemas and semantics.

Paper Nr: 567
Title:

AUTOMATIC DERIVATION OF SPRING-OSGI BASED WEB ENTERPRISE APPLICATIONS

Authors:

Elder Cirilo, Uirá Kuleza and Carlos Lucena

Abstract: Component-based technologies (CBTs) are nowadays widely adopted in the development of different kinds of applications. They provide functionalities that facilitate the management of application components and their different configurations. Spring and OSGi are two relevant examples of mainstream CBTs. In this paper, we explore the use of Spring/OSGi technologies in the context of automatic product derivation. We illustrate, through a typical web-based enterprise application: (i) how different models of a feature-based product derivation tool can be automatically generated from the configuration files of Spring and OSGi and from Java annotations; and (ii) how the different abstractions provided by these CBTs can be related to a feature model with the aim of automatically deriving a Spring/OSGi-based application or product line.

Paper Nr: 585
Title:

GROUNDING AND MAKING SENSE OF AGILE SOFTWARE DEVELOPMENT

Authors:

Mark Woodman and Aboubakr A. Moteleb

Abstract: The paper explores areas of strategic frameworks for sense-making, knowledge management and Grounded Theory methodologies to offer a rationalization of some aspects of agile software development. In a variety of projects where knowledge management forms part of the solution, we have begun to see activities and principles that closely correspond to many aspects of the wide family of agile development methods. We offer reflection on why, as a community, we are attracted to agile methods, and consider why they work.

Paper Nr: 633
Title:

KEYMANTIC: A KEYWORD-BASED SEARCH ENGINE USING STRUCTURAL KNOWLEDGE

Authors:

Francesco Guerra, Sonia Bergamaschi, Mirko Orsini, Antonio Sala and Claudio Sartori

Abstract: Traditional techniques for query formulation require knowledge of the database contents, i.e. which data are stored in the data source and how they are represented. In this paper, we discuss the development of a keyword-based search engine for structured data sources. The idea is to couple the ease of use and flexibility of keyword-based search with metadata extracted from data schemata and with extensional knowledge, which together constitute a semantic network of knowledge. By translating keywords into SQL statements, we develop a search engine that is effective, semantic-based, and applicable even when instances are not continuously available, such as in integrated data sources or in data sources extracted from the deep web.

Paper Nr: 641
Title:

ESpace - Web-scale Integration One Step at a Time

Authors:

Kajal Claypool, Jeremy Mineweaser, Dan Van Hook, Michael Scarito and Elke Rundensteiner

Abstract: In this paper, we take the position that a flexible and agile integration infrastructure that harmoniously and transparently oscillates between and supports different levels of integration – loose or partial integration at one end of the spectrum and tight or full integration at the other – is essential for achieving large Web-scale integration. Furthermore, domain knowledge provided by users and domain experts is essential for improving the quality of integration between resources. We posit that Web 2.0, or “social Web”, technologies can be brought to bear to facilitate implicit user-driven, web-scale integration at different levels. In this paper, we present ESpace, a prototype for a pay-as-you-go integration framework that supports loosely to tightly integrated resources within the same infrastructure, where loose integration means pulling resources on the web together based on the tag meta-information associated with them, and tight integration represents classic schema-matching-based integration techniques. This is but the first step in enabling web-scale pay-as-you-go integration by providing fine-grained analysis and integrating substructures within resources – achieving tighter integration for select resources at the user’s behest.

Posters
Paper Nr: 52
Title:

GENERIC APPROACH TO AUTOMATIC INDEX UPDATING IN OODBMS

Authors:

Tomasz M. Kowalski, Kamil Kuliberda, Jacek Wiślicki and Radosław Adamus

Abstract: In this paper, we describe a robust approach to the problem of automatic index updating, i.e. maintaining consistency between data and indices. Introducing object-oriented notions (classes, inheritance, polymorphism, class methods, etc.) in databases allows defining more complex selection predicates; nevertheless, in order to facilitate the selection process through indices, index updating requires substantial revision. Inadequate index maintenance can lead to serious errors in query processing, as has been shown using the example of the Oracle 11g ORDBMS. The authors' work is based on the Stack-Based Architecture (SBA) and has been implemented and tested in the ODRA (Object Database for Rapid Applications development) OODBMS prototype.

Paper Nr: 127
Title:

SEMI-SUPERVISED INFORMATION EXTRACTION FROM VARIABLE-LENGTH WEB-PAGE LISTS

Authors:

Daniel Nikovski, Alan Esenther and Akihiro Baba

Abstract: We propose two methods for constructing automated programs for extraction of information from a class of web pages that are very common and of high practical significance — variable-length lists of records with identical structure. Whereas most existing methods would require multiple example instances of the target web page in order to be able to construct extraction rules, our algorithms require only a single example instance. The first method analyzes the document object model (DOM) tree of the web page to identify repeatable structure that includes all of the specified data fields of interest. The second method provides an interactive way of discovering the list node of the DOM tree by visualizing the correspondence between portions of XPath expressions and visual elements in the web page. Both methods construct extraction rules in the form of XPath expressions, facilitating ease of deployment and integration with other information systems.
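
The first method searches the DOM tree for repeatable structure. As a rough illustration only (not the authors' algorithm), a minimal Python sketch that flags elements whose children all repeat the same tag signature — a common heuristic for locating the "list node" that holds records of identical structure:

```python
import xml.etree.ElementTree as ET

def find_list_nodes(root):
    """Return elements whose children all share one tag signature,
    a simple heuristic for the list node of a record list."""
    def signature(el):
        # Recursive tag structure of an element, ignoring text content.
        return (el.tag, tuple(signature(c) for c in el))
    hits = []
    for el in root.iter():
        children = list(el)
        if len(children) >= 2 and len({signature(c) for c in children}) == 1:
            hits.append(el)
    return hits

# A toy record list: three rows with identical structure.
doc = ET.fromstring(
    "<table id='results'>"
    "<tr><td>Ann</td><td>42</td></tr>"
    "<tr><td>Bob</td><td>17</td></tr>"
    "<tr><td>Eve</td><td>99</td></tr>"
    "</table>")
print(sorted({el.tag for el in find_list_nodes(doc)}))  # ['table', 'tr']
```

Both the table (uniform rows) and each row (uniform cells) qualify here; a full system would rank candidates by which one covers the user-specified data fields.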

Paper Nr: 157
Title:

TOWARDS A COMMON PUBLIC SERVICE INFRASTRUCTURE FOR SWISS UNIVERSITIES

Authors:

Florian Schnabel, Eva Bucherer and Uwe Heck

Abstract: Due to the Bologna Declaration and the corresponding procedures of performance management and output funding, universities are undergoing organisational changes both within and across institutions. The need for an appropriate organisational structure and for efficient and effective processes makes support by a corresponding IT infrastructure essential. The IT environment of Swiss universities is currently dominated by a high level of decentralisation and a high degree of proprietary solutions. Economies of scale through joint development or shared services remain untapped, and the increasingly essential integration of applications to support either university-internal or cross-organisational processes is hindered. In this paper we propose an approach for a comprehensive service-oriented architecture for Swiss universities to overcome the current situation and to cope with organisational and technical challenges. We further present an application scenario revealing how Swiss universities will benefit from the proposed architecture.

Paper Nr: 200
Title:

AN ARCHITECTURE FOR THE RAPID DEVELOPMENT OF XML-BASED WEB APPLICATIONS

Authors:

José Paulo Leal and Jorge Braz Gonalves

Abstract: Our research goal is the generation of working web applications from high-level specifications. Based on our experience in using XML transformations for that purpose, we applied this approach to the rapid development of database management applications. The result is an architecture that defines a web application as a set of XML transformations and generates these transformations using second-order transformations from a database schema. We used the Model-View-Controller architectural pattern to assign different roles to transformations, and defined a pipeline of transformations to process an HTTP request. The definition of these transformations is based on a correspondence between data-oriented XML Schema definitions and the Entity-Relationship model. Using this correspondence, we were able to produce transformations that implement database operations, form interface generators and application controllers, as well as the second-order transformations that produce all of them. This paper also includes a description of a RAD system following this architecture that allowed us to perform a critical evaluation of this proposal.

Paper Nr: 211
Title:

ASSESSING DATABASES IN .NET - Comparing Approaches

Authors:

Daniela da Cruz and Pedro Rangel Henriques

Abstract: Language-Integrated Query (LINQ) recently appeared as the new query language of the .NET framework. This query language, an extension to C# and Visual Basic, allows query expressions to benefit from features previously available only to imperative code — rich metadata, IntelliSense, compile-time syntax checking, and static typing. In this paper, we compare the methods provided by .NET to query databases (LINQ, SQL and Object), both in terms of performance and in terms of the approach used. A running example guides this comparison.

Paper Nr: 246
Title:

AUTOMATIC DETECTION OF DUPLICATED ATTRIBUTES IN ONTOLOGY

Authors:

Irina Astrova and Arne Koschel

Abstract: Semantic heterogeneity is the ambiguous interpretation of terms describing the meaning of data in heterogeneous data sources such as databases. This is a well-known problem in data integration. A recent solution to this problem is to use ontologies, which is called ontology-based data integration. However, ontologies can contain duplicated attributes, which can lead to improper integration results. This paper proposes a novel approach that analyzes a workload of queries over an ontology to automatically calculate (semantic) distances between attributes, which are then used for duplicate detection.

Paper Nr: 268
Title:

EFFICIENTLY LOCATING WEB SERVICES USING A SEQUENCE-BASED SCHEMA MATCHING APPROACH

Authors:

Alsayed Algergawy, Eike Schallehn and Gunter Saake

Abstract: Locating desired Web services has become a challenging research problem due to the vast number of available Web services within an organization and on the Web. This necessitates the development of flexible, effective, and efficient Web service discovery frameworks. For this purpose, both the semantic description and the structure information of Web services should be exploited in an efficient manner. This paper presents a flexible and efficient service discovery approach, which is based on the use of the Prüfer encoding method to construct a one-to-one correspondence between Web services and sequence representations. In this paper, we describe and experimentally evaluate our Web service discovery approach.
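
The Prüfer encoding at the heart of this approach maps a labeled tree to a unique sequence. A minimal sketch of the classic construction (the Web-service-specific representation is the paper's contribution and is not reproduced here):

```python
from collections import defaultdict

def prufer_sequence(edges, n):
    """Pruefer sequence of a labeled tree on nodes 1..n: repeatedly
    remove the leaf with the smallest label and record the label of
    its only neighbour, until two nodes remain."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seq = []
    for _ in range(n - 2):
        leaf = min(node for node in adj if len(adj[node]) == 1)
        neighbour = next(iter(adj[leaf]))
        seq.append(neighbour)
        adj[neighbour].discard(leaf)
        del adj[leaf]
    return seq

print(prufer_sequence([(1, 2), (2, 3), (3, 4)], 4))  # [2, 3]
```

The mapping is one-to-one: every labeled tree on n nodes yields a distinct sequence of length n-2, which is what allows tree-structured service descriptions to be compared as sequences.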

Paper Nr: 328
Title:

A FLEXIBLE EVENT-CONDITION-ACTION (ECA) RULE PROCESSING MECHANISM BASED ON A DYNAMICALLY RECONFIGURABLE STRUCTURE

Authors:

Xiang Li, Ying Qiao and Hongan Wang

Abstract: Adding and deleting Event-Condition-Action (ECA) rules, i.e. modifying the processing structures in an active database, is expected to happen on the fly and to cause minimal impact on the existing ECA rule processing procedures. In this paper, we present a flexible ECA rule processing mechanism for active databases. It uses a dynamically reconfigurable structure, called the unit-mail graph (UMG), and a middleware, called the Unit Modification and Management Layer (UMML), to localize the impact of adding and deleting ECA rules so as to support on-the-fly rule modification. The ECA rule processing mechanism can continue to work while the user adds or deletes rules, which enables the active database to react to external events arriving at the system during rule modification. We also use a smart home environment to evaluate our work.

Paper Nr: 477
Title:

AN MDA APPROACH FOR OBJECT-RELATIONAL MAPPING

Authors:

Cătălin Strîmbei and Marin Fotache

Abstract: This paper reviews several emergent approaches that attempt to capitalize on the SQL data “engineering” standard in current <object>-to-<object relational> mapping methodologies. As a particular contribution, we discuss a slightly different OR mapping approach, based on ORDBMS extension mechanisms that allow publishing new data structures as Abstract Data Types (ADTs).

Paper Nr: 483
Title:

INNOVATIVE PROCESS EXECUTION IN SERVICE-ORIENTED ENVIRONMENTS

Authors:

Dirk Habich, Steffen Preissler, Hannes Voigt and Wolfgang Lehner

Abstract: Today’s information systems are often built on the foundation of service-oriented environments. Although the fundamental purpose of an information system is the processing of data and information, the service-oriented architecture (SOA) does not treat data as a first-class citizen. Current SOA technologies support neither the explicit modeling of data flows in common business process modeling languages (such as BPMN) nor the usage of specialized data transformation and propagation technologies (for instance, ETL tools) on the process execution layer (BPEL). In this paper, we introduce our data-aware approach on both the execution and the modeling perspectives of business processes.

Paper Nr: 630
Title:

TISM - A Tool for Information Systems Management

Authors:

António Trigo, João Barroso and João Varajão

Abstract: The complexity of Information Technology and Information Systems within organizations keeps growing rapidly. As a result, the work of the Chief Information Officer is becoming increasingly difficult, since he has to manage multiple technologies and perform several activities of different natures. In this position paper, we propose the development of a new tool for Chief Information Officers, which will systematize and aggregate the enterprise's Information Systems Function information.

Area 2 - Artificial Intelligence and Decision Support Systems

Full Papers
Paper Nr: 85
Title:

A Self-learning System for Object Categorization

Authors:

Danil V. Prokhorov

Abstract: We propose a learning system for object categorization which utilizes information from multiple sensors. The system learns not only prior to its deployment in a supervised mode but also in a self-learning mode. A competition-based neural network learning algorithm is used to distinguish between representations of different categories. We illustrate the application of the system with an example of image categorization. A radar guides the selection of candidate images provided by the camera for subsequent analysis by our learning method. Radar information is coupled with navigational information for improved localization of objects during self-learning.

Paper Nr: 88
Title:

A Self-tuning of Membership Functions for Medical Diagnosis

Authors:

Nuanwan Soonthornphisaj and Pattarawadee Teawtechadecha

Abstract: In this paper, a self-tuning of membership functions for fuzzy logic is proposed for medical diagnosis. Our algorithm uses a decision tree as a tool to generate three kinds of membership functions: triangular, bell-shaped and Gaussian. The system automatically selects the form of membership function that provides the best classification result. The advantage of our system is that it does not need an expert to create membership functions for each feature; instead, the system creates various membership functions using a learning algorithm that learns from the training set. In some domains, users can provide prior knowledge that can be used to enhance the performance of the classifier. In the medical domain, however, we found that some diseases are difficult to diagnose. This would not be a problem if the disease had been completely explored in the medical literature. In order to rule out the patient, we need a domain expert to provide the membership functions for the many attributes obtained from laboratory tests. Since the disease has not been completely explored, the membership functions provided by the expert might be biased and lead to poor classification performance. The performance of our proposed algorithm has been investigated on two medical data sets. The experimental results show that our approach can effectively enhance classification performance compared to neural networks and traditional fuzzy logic.
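
The three membership-function shapes mentioned (triangular, bell and Gaussian) have standard closed forms; a generic sketch with illustrative parameters, not tied to the paper's tuning procedure:

```python
import math

def triangular(x, a, b, c):
    """Triangular membership: rises linearly a->b, falls linearly b->c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def bell(x, a, b, c):
    """Generalized bell membership with width a, slope b, centre c."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

def gaussian(x, c, sigma):
    """Gaussian membership centred at c with spread sigma."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

print(triangular(5, 0, 5, 10), gaussian(1.0, 1.0, 0.5))  # 1.0 1.0
```

Each function returns a degree of membership in [0, 1] and peaks at its centre; a self-tuning system would fit the parameters (a, b, c, sigma) per feature from the training data instead of asking an expert for them.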

Paper Nr: 128
Title:

Insolvency Prediction of Irish Companies using Backpropagation and Fuzzy ARTMAP Neural Networks

Authors:

Anatoli Nachev, Seamus Hill and Borislav Stoyanov

Abstract: This study explores experimentally the potential of BPNNs and Fuzzy ARTMAP neural networks to predict the insolvency of Irish firms. We used financial information for Irish companies over a period of six years, preprocessed appropriately for use with neural networks. Prediction results show that, with certain network parameters, the Fuzzy ARTMAP model outperforms the BPNN. It also outperforms self-organising feature maps, as reported by other studies that use the same dataset. The accuracy of predictions was validated by ROC analysis, AUC metrics, and leave-one-out cross-validation.

Paper Nr: 135
Title:

Frequent Subgraph-based Approach for Classifying Vietnamese Text Documents

Authors:

Tu Anh Hoang Nguyen and Kiem Hoang

Abstract: In this paper we present a simple approach for Vietnamese text classification without word segmentation, based on frequent subgraph mining techniques. A graph-based model, instead of the traditional vector-based model, is used for document representation. The classification model employs structural patterns (subgraphs) and the Dice measure of similarity to identify the class of a document. The method is evaluated on a Vietnamese data set for classification accuracy. Results show that it can outperform the k-NN algorithm (based on vector and hybrid document representations) in terms of accuracy and classification time.
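
The Dice measure used here to compare sets of structural patterns has a one-line definition; a small illustration with hypothetical feature sets (the subgraph features themselves come from the paper's mining step):

```python
def dice(a, b):
    """Dice coefficient between two feature sets: 2|A ∩ B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # convention: two empty sets are identical
    return 2 * len(a & b) / (len(a) + len(b))

# Two documents sharing two of their three (hypothetical) subgraph patterns.
print(dice({"g1", "g2", "g3"}, {"g2", "g3", "g4"}))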

Paper Nr: 173
Title:

Random Projection Ensemble Classifiers

Authors:

Alon Schclar and Lior Rokach

Abstract: We introduce a novel ensemble model based on random projections. The contribution of using random projections is two-fold. First, the randomness provides the diversity which is required for the construction of an ensemble model. Second, random projections embed the original set into a space of lower dimension while preserving the dataset's geometrical structure up to a given distortion. This reduces the computational complexity of the model construction as well as the complexity of the classification. Furthermore, dimensionality reduction removes noisy features from the data and represents the information inherent in the raw data using a small number of features. The noise removal increases the accuracy of the classifier. The proposed scheme was tested using WEKA-based procedures applied to 16 benchmark datasets from the UCI repository.
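
To illustrate the two ingredients described above (random Gaussian projections for diversity and dimensionality reduction, plus ensemble voting), here is a minimal NumPy sketch using nearest-centroid base classifiers; the paper's WEKA-based setup and choice of base learners are not reproduced:

```python
import numpy as np

def rp_ensemble_predict(X_train, y_train, X_test, n_members=11, dim=5, seed=0):
    """Majority vote over nearest-centroid classifiers, each trained on a
    different random Gaussian projection of the data (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y_train)
    votes = np.zeros((len(X_test), len(classes)), dtype=int)
    for _ in range(n_members):
        # Random projection matrix: original dim -> lower dim.
        R = rng.normal(size=(X_train.shape[1], dim)) / np.sqrt(dim)
        P_train, P_test = X_train @ R, X_test @ R
        centroids = np.stack([P_train[y_train == c].mean(axis=0) for c in classes])
        # Assign each test point to the nearest class centroid.
        d = ((P_test[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        votes[np.arange(len(X_test)), d.argmin(axis=1)] += 1
    return classes[votes.argmax(axis=1)]

# Demo on two well-separated synthetic blobs in 20 dimensions.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 20)), rng.normal(6, 1, (30, 20))])
y = np.array([0] * 30 + [1] * 30)
accuracy = (rp_ensemble_predict(X, y, X) == y).mean()
```

Each member sees a different low-dimensional view of the same data, which supplies the diversity the ensemble needs while cutting per-member training cost.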

Paper Nr: 188
Title:

Knowledge Reuse in Data Mining Projects and its Practical Applications

Authors:

Rodrigo Cunha, Paulo Adeodato and Silvio Meira

Abstract: The objective of this paper is to provide an integrated environment for knowledge reuse in KDD that prevents the recurrence of known errors and reinforces project successes, based on previous experience. It combines methodologies from project management, data warehousing, data mining and knowledge representation. Unlike purely algorithmic papers, this one focuses on performance metrics used for managerial purposes, such as the time taken for solution development and the number of files not automatically managed, while preserving equivalent performance on the technical solution quality metrics. This environment has been validated with metadata collected from previous KDD projects developed and deployed for real-world applications by the development team members. The case studies carried out in actual contracted projects have shown that this environment assesses the risk of failure for new projects, controls and documents the whole KDD project development process, and helps in understanding the conditions that lead KDD projects to success or failure.

Paper Nr: 231
Title:

Enhancing Text Clustering Performance using Semantic Similarity

Authors:

Walaa K. Gad and Mohamed S. Kamel

Abstract: Text document clustering can be challenging due to the complex linguistic properties of text documents. Most clustering techniques are based on the traditional bag-of-words representation of documents. In such a representation, ambiguity, synonymy and semantic similarities may not be captured using traditional text mining techniques that are based on word and/or phrase frequencies in the text. In this paper, we propose a semantic similarity based model to capture the semantics of the text. The proposed model, in conjunction with a lexical ontology, solves the synonym and hypernym problems. It utilizes WordNet as an ontology and uses the adapted Lesk algorithm to examine and extract the relationships between terms. The proposed model reflects these relationships by semantic weights added to the term frequency weight to represent the semantic similarity between terms. Experiments using the proposed semantic similarity based model in text clustering are conducted. The obtained results show promising performance improvements compared to the traditional vector space model as well as other existing methods that include semantic similarity measures in text clustering.

Paper Nr: 233
Title:

Stereo Matching using Synchronous Hopfield Neural Network

Authors:

Te-Hsiu Sun

Abstract: Deriving depth information has been an important issue in computer vision. In this area, stereo vision is an important technique for 3D information acquisition. This paper presents a scanline-based stereo matching technique using synchronous Hopfield neural networks (SHNN). Feature points are extracted and selected using the Sobel operator and a user-defined threshold for a pair of scanned images. Then, the scanline-based stereo matching problem is formulated as an optimization task where an energy function, including dissimilarity, continuity, disparity and uniqueness mapping properties, is minimized. Finally, the incorrect matches are eliminated by applying a false-target removal rule. The proposed method is verified with an experiment using several commonly used stereo images. The experimental results show that the proposed method effectively solves the stereo matching problem and is applicable to various areas.

Paper Nr: 245
Title:

Monotonic Monitoring of Discrete-event Systems with Uncertain Temporal Observations

Authors:

Gianfranco Lamperti and Marina Zanella

Abstract: In discrete-event system monitoring, the observation is fragmented over time and a set of candidate diagnoses is output at the reception of each fragment (so as to allow for possible control and recovery actions). When the observation is uncertain (typically, a DAG with partial temporal ordering) a problem arises about the significance of the monitoring output: two sets of diagnoses, relevant to two consecutive observation fragments, may be unrelated to one another, and, even worse, they may be unrelated to the actual diagnosis. To cope with this problem, the notion of monotonic monitoring is introduced, which is supported by specific constraints on the fragmentation of the uncertain temporal observation, leading to the notion of stratification. The paper shows that only under stratified observations can significant monitoring results be guaranteed.

Paper Nr: 250
Title:

A Service Composition Framework for Decision Making under Uncertainty

Authors:

Malak Al-Nory, Alexander Brodsky and Hadon Nash

Abstract: Proposed and developed is a service composition framework for decision-making under uncertainty, which is applicable to stochastic optimization of supply chains. Also developed is a library of modeling components which include Scenario, Random Environment, and Stochastic Service. Service models are classes in the Java programming language extended with decision variables, assertions, and business objective constructs. The constructor of a stochastic service formulates a recourse stochastic program and finds the optimal instantiation of real values into the service initial and corrective decision variables leading to the optimal business objective. The optimization is not done by repeated simulation runs, but rather by automatic compilation of the simulation model in Java into a mathematical programming model in AMPL and solving it using an external solver.

Paper Nr: 295
Title:

A Multi-Criteria Resource Selection Method for Software Projects using Fuzzy Logic

Authors:

Daniel Antonio Callegari and Ricardo M. Bastos

Abstract: When planning a software project, we must assign resources to tasks. Resource selection is a fundamental step in resource allocation, since we first need to find the most suitable candidates for each task before deciding who will actually perform them. In order to rank available resources, we have to evaluate their skills and define the corresponding selection criteria for the tasks. While many approaches represent skill levels by means of ordinal scales and define selection criteria using binary operations, doing so implies some limitations. Pure mathematical approaches are difficult to model and suffer from a partial loss of meaning in terms of knowledge representation. Fuzzy Logic, as an extension of classical sets and logic, uses linguistic variables and a continuous range of truth values for decision and set membership. It allows handling the inherent uncertainties in this process while hiding the complexity from the final user. In this paper we show how Fuzzy Logic can be applied to the resource selection problem. A prototype was built to demonstrate and evaluate the results.

Paper Nr: 382
Title:

An Optimized Hybrid Kohonen Neural Network for Ambiguity Detection in Cluster Analysis using Simulated Annealing

Authors:

E. Mohebi and M. N. M. Sap

Abstract: One of the popular tools in the exploratory phase of data mining and pattern recognition is the Kohonen Self-Organizing Map (SOM). The SOM maps the input space onto a 2-dimensional grid and forms clusters. Recent experiments have shown that, to capture the ambiguity involved in cluster analysis, it is not necessary to have crisp boundaries in some clustering operations. In this paper, to overcome this ambiguity, a combination of Rough Set Theory and Simulated Annealing is proposed and applied to the output grid of the SOM. Experiments show that the proposed two-stage algorithm (first using the SOM to produce the prototypes, then applying rough sets and SA to assign the overlapped data to the true clusters they belong to) outperforms the crisp clustering algorithms (i.e. I-SOM) and reduces the errors.

Paper Nr: 441
Title:

Interactive Quality Analysis in the Automotive Industry: Concept and Design of an Interactive, Web-based Data Mining Application

Authors:

Steffen Fritzsche, Markus Mueller and Carsten Lanquillon

Abstract: In this paper we present an interactive, web-based data mining application that supports quality analysis in the automotive industry. Our tool is designed to help automotive engineers in their task of identifying the root cause of quality issues. Knowing what exactly caused a problem and identifying vehicles that are most likely to be affected by the issue, helps in planning and implementing effective service actions. We show how data mining can be applied in the given application domain, point out the key role of interactivity and propose an appropriate software architecture.

Paper Nr: 466
Title:

NARFO Algorithm: Mining Non-redundant and Generalized Association Rules based on Fuzzy Ontologies

Authors:

Rafael Garcia Miani, Cristiane A. Yaguinuma, Marilde T. P. Santos and Mauro Biajiz

Abstract: Traditional approaches for mining generalized association rules are based only on database contents and focus on exact matches among items. However, in many applications, the use of some background knowledge, such as ontologies, can enhance the discovery process and generate semantically richer rules. Thus, this paper proposes the NARFO algorithm, a new algorithm for mining non-redundant and generalized association rules based on fuzzy ontologies. A fuzzy ontology is used as background knowledge to support the discovery process and the generation of rules. One contribution of this work is the generalization of non-frequent itemsets, which helps to extract important and meaningful knowledge. The NARFO algorithm also contributes at the post-processing stage with its generalization and redundancy treatment. Our experiments showed that the number of rules was reduced considerably, without redundancy, achieving a 63.63% average reduction in comparison with the XSSDM algorithm.

Paper Nr: 516
Title:

Automated Construction of Process Goal Trees from EPC-Models to Facilitate Extraction of Process Patterns

Authors:

Andreas Bögl, Michael Schrefl, Gustav Pomberger and Norbert Weber

Abstract: A system that enables the reuse of process solutions should be able to retrieve “common” or “best practice” pattern solutions (common modelling practices) from existing process descriptions for a certain business goal. Manual extraction of common modelling practices is labour-intensive, tedious and cumbersome. This paper presents an approach for the automated extraction of process goals from Event-driven Process Chains (EPC) and their annotation to EPC functions and events. In order to facilitate goal reasoning for the identification of common modelling practices, an algorithm (G-Tree-Construction) is proposed that constructs a hierarchical goal tree.

Short Papers
Paper Nr: 58
Title:

AUTOMATIC INFORMATION PROCESSING AND UNDERSTANDING IN COGNITIVE BUSINESS SYSTEMS

Authors:

Ryszard Tadeusiewicz, Marek R. Ogiela and Lidia Ogiela

Abstract: This paper brings the concept of a new generation of information systems, automatic understanding systems (AUS), to the attention of the computer science community as a new possibility for systems analysis and design. The novelty of this idea lies in extending the method of automatic understanding, previously used in the area of medical image analysis, classification and interpretation, to the more general and needed area of systems analysis. The AUS approach is, in essence, different from other approaches such as those based on neural networks, pattern analysis, image interpretation or machine learning. AUS enables the determination of the meaning of analysed data, both numeric and descriptive. Cognitive methods, on which the AUS concept and construction are based, have roots in the psychological and neurophysiological processes of understanding and describing analysed data as they take place in the human brain.

Paper Nr: 60
Title:

DETECTING DOMESTIC VIOLENCE - Showcasing a Knowledge Browser based on Formal Concept Analysis and Emergent Self Organizing Maps

Authors:

Paul Elzinga, Jonas Poelmans, Stijn Viaene and Guido Dedene

Abstract: Over 90% of the case data from police inquiries is stored as unstructured text in police databases. We use the combination of Formal Concept Analysis and Emergent Self Organizing Maps for exploring a dataset of unstructured police reports out of the Amsterdam-Amstelland police region in the Netherlands. In this paper, we specifically aim at making the reader familiar with how we used these two tools for browsing the dataset and how we discovered useful patterns for labelling cases as domestic or as non-domestic violence.

Paper Nr: 68
Title:

USING QUALITY COSTS IN A MULTI-AGENT SYSTEM FOR AN AIRLINE OPERATIONS CONTROL

Authors:

Antonio J. M. Castro and Eugenio Oliveira

Abstract: The Airline Operations Control Centre (AOCC) tries to solve unexpected problems that might occur during airline operations. Problems related to aircraft, crew members and passengers are common, and the actions towards the solution of these problems are usually known as operations recovery. Usually, the AOCC tries to minimize operational costs while satisfying all the required rules. In this paper we present the implementation of a distributed Multi-Agent System (MAS) representing the existing roles in an AOCC. This MAS has several specialized software agents that implement different algorithms, competing to find the best solution for each problem; the solutions include not only operational costs but also quality costs, so that passenger satisfaction can be considered in the final decision. We present a real case study where a crew recovery problem is solved. We show that it is possible to find valid solutions with better passenger satisfaction and, under certain conditions, without significantly increasing operational costs.

Paper Nr: 120
Title:

FREQUENCY ASSIGNMENT OPTIMIZATION USING THE SWARM INTELLIGENCE MULTI-AGENT BASED ALGORITHM (SIMBA)

Authors:

Grant Blaise O'Reilly

Abstract: The swarm intelligence multi-agent based algorithm (SIMBA) is presented in this paper. SIMBA utilizes swarm intelligence and a multi-agent system (MAS) to optimize the frequency assignment problem (FAP), considering both local and global (i.e. collective) solutions in the optimization process. Stigmergy single cell optimization (SSCO) is also used by the individual agents in SIMBA. SSCO enables the agents to recognize interference patterns in the frequency assignment structure being optimized and to augment it with frequency selections that minimize the interference. The changing configuration of the frequency assignment structure acts as a source of information that aids the agents in making further decisions. Due to the increasing demand for cellular communication services and the limited available frequency spectrum, optimal frequency assignment is necessary. SIMBA was used to optimize the fixed-spectrum frequency assignment problem (FS-FAP) in cellular radio networks. The results produced by SIMBA were benchmarked against the COST 259 Siemens scenarios. The frequency assignment solutions produced by SIMBA were also implemented in a commercial cellular radio network, and the results are presented.

Paper Nr: 121
Title:

A NEW HEURISTIC FUNCTION IN ANT-MINER APPROACH

Authors:

Urszula Boryczka and Jan Kozak

Abstract: In this paper, a novel rule discovery system that utilizes Ant Colony Optimization (ACO) is presented. ACO is a metaheuristic inspired by the behavior of real ants, which search for optimal solutions by considering both local heuristics and previous knowledge, observed through pheromone changes. In our approach we want to ensure the good performance of Ant-Miner by applying new versions of heuristic functions in the main rule. We emphasize the role of the heuristic function by analyzing the influence of different propositions of these functions on the performance of Ant-Miner. The comparative study is done using five data sets from the UCI Machine Learning repository.

Paper Nr: 129
Title:

FORMULATING ASPECTS OF PAYPAL IN THE LOGIC FRAMEWORK OF GBMF

Authors:

Min Li and Chris J. Hogger

Abstract: Logic-based modelling methods can benefit business organizations in constructing models offering flexible knowledge representation supported by correct and effective inference. It remains a continuing research issue how best to apply logic-based formalization to informal and semi-formal business modelling. In this paper, we formulate aspects of the general business specification of PayPal in logic programming by applying the logic-based GBMF, a declarative, context-independent, implementable and highly expressive framework for modelling high-level aspects of business. In particular, we introduce the primary PayPal business concepts and relations; specify simple but essential PayPal business processes associated with a knowledge base; and set core business rules and controls to simulate the PayPal case in a fully automatic manner. This modelling method offers the advantages of general-purpose expressiveness and well-understood execution regimes, avoiding the need for a special-purpose engine supporting a specialized modelling language.

Paper Nr: 133
Title:

AN AGENT-BASED SYSTEM FOR HEALTHCARE PROCESS MANAGEMENT

Authors:

Bian Wu, Minhong Wang and Hongmin Yun

Abstract: An effective approach to healthcare process management is the key to delivering high-quality services in healthcare. An agent-based and process-oriented system is presented in this study to facilitate dynamic and interactive processes in the healthcare environment. The system is developed in three layers: the agent layer for healthcare process management, the database layer for maintenance of medical records and knowledge, and the interface layer for human-computer interaction. The treatment of primary open-angle glaucoma is used as an example to demonstrate the effectiveness of the approach.

Paper Nr: 136
Title:

AOI BASED NEUROFUZZY SYSTEM TO EVALUATE SOLDER JOINT QUALITY

Authors:

G. Acciani, G. Brunetti, G. Fornarelli, A. Giaquinto and D. Maiullari

Abstract: Surface Mount Technology is extensively used in the production of Printed Circuit Boards due to the high density of electronic device integration. In such a production process, several defects can occur in the final electronic components, compromising their correct operation. In this paper, a neurofuzzy solution is proposed to process information derived from an automatic optical inspection system. The designed solution provides a Quality Index for a solder joint, reproducing the modus operandi of an expert and making it automatic. Moreover, the considered solution presents some attractive advantages: a complex acquisition system is not needed, reducing equipment costs and shifting the assessment of a solder joint onto the fuzzy parts. Finally, the typically low computational cost of fuzzy systems can satisfy the urgent time constraints of in-line inspection in some industrial production processes.

Paper Nr: 145
Title:

AN ORDER CLUSTERING SYSTEM USING ART2 NEURAL NETWORK AND PARTICLE SWARM OPTIMIZATION METHOD

Authors:

R. J. Kuo, M. J. Wang, T. W. Huang and Tung-Lai Hu

Abstract: Surface mount technology (SMT) production system setup is quite time-consuming for industrial personal computers (PCs) because of the high level of customization. Therefore, this study proposes a novel two-stage clustering algorithm for grouping orders together before scheduling, in order to reduce SMT setup time. The first stage uses the adaptive resonance theory 2 (ART2) neural network to find the number of clusters and then feeds the results to the second stage, which uses the particle swarm K-means optimization (PSKO) algorithm. An internationally well-known industrial PC manufacturer provided the evaluation data. The results show that the proposed clustering method outperforms three other clustering algorithms. Through order clustering, scheduling products belonging to the same cluster together can reduce production time and machine idle time.

Paper Nr: 162
Title:

USING UML CLASS DIAGRAM AS A KNOWLEDGE ENGINEERING TOOL

Authors:

Thomas Raimbault, David Genest and Stéphane Loiseau

Abstract: The UML class diagram is the de facto standard, including in Knowledge Engineering, for modeling the structural knowledge of systems. Attaching importance to visual representation, and building on previous work in which we gave a logically defined extension of the UML class diagram to represent queries and constraints within the UML visual environment, we present here how the conceptual graph model can be used to answer queries and to check constraints in concrete terms.

Paper Nr: 167
Title:

K-ANNOTATIONS - An Approach for Conceptual Knowledge Implementation using Metadata Annotations

Authors:

Eduardo S. E. Castro, Mara Abel and R. Tom Price

Abstract: A number of Knowledge Engineering methodologies have been proposed during the last decades. These methodologies use different languages for knowledge modelling. As most of these languages are based on logic, knowledge models defined using these languages cannot be easily converted to the Object-Oriented (OO) paradigm. This raises a relevant problem for the development phase of knowledge system (KS) projects: several complex knowledge systems are developed using OO languages. So, even if the conceptual model can be modelled using the logical paradigm, it is important to provide a standard knowledge representation within the OO paradigm. This paper introduces k-annotations, an approach for conceptual knowledge implementation using metadata annotations and the aspect-oriented paradigm. The proposed approach allows the development of the conceptual model using the OO paradigm and establishes a standard path to implement this model. The main goal of the approach is to provide ways to reuse both the knowledge design and the related programming code from a single model representation.

Paper Nr: 170
Title:

ANT PAGERANK ALGORITHM

Authors:

Mahmoud Zaki Abdo, Manal Ahmed Ismail and Mohamed Ibraheem Eladawy

Abstract: The amount of global information on the World Wide Web is growing at an incredible rate, and millions of results are returned from search engines, so the ranking of pages in search engines is very important. One of the basic ranking algorithms is the PageRank algorithm. This paper proposes an enhancement of the PageRank algorithm, based on the Ant algorithm, to speed up the computational process. On average, this technique yields about 7.5 out of ten pages relevant to the query topic, and the total time is reduced by 19.9%.

Paper Nr: 176
Title:

STUDY ON IMAGE CLASSIFICATION BASED ON SVM AND THE FUSION OF MULTIPLE FEATURES

Authors:

Dequan Zheng, Tiejun Zhao, Sheng Li and Yufeng Li

Abstract: In this paper, an adaptive feature-weight-adjusted image classification method is proposed, based on SVM and the fusion of multiple features. Firstly, a classifier is constructed separately for each image feature; the weight coefficient of each feature is then learned automatically from the training data set and the constructed classifiers. Finally, a composite classifier is created by combining the separate classifiers with their corresponding weight coefficients. The experimental results show that our scheme improves the performance of image classification and has adaptive ability compared with the general approach. Moreover, the scheme has a certain robustness because it avoids the impact of the varying dimensionality of each feature.

Paper Nr: 183
Title:

A FUZZY-GUIDED GENETIC ALGORITHM FOR QUALITY ENHANCEMENT IN THE SUPPLY CHAIN

Authors:

Cassandra X. H. Tang and Henry C. W. Lau

Abstract: In response to globalization and fierce competition, manufacturers gradually realize the challenge posed by demanding customers who seek products of high quality and low cost, which implicitly calls for quality improvement of products in a cost-effective way. Traditional methods have focused on specific process optimization for quality enhancement instead of emphasizing organizational collaboration to ensure qualitative performance. This paper introduces an artificial intelligence (AI) approach to attain quality enhancement by automating the selection of process parameters within the supply chain. The originality of this research lies in providing an optimal configuration of process parameters along the supply chain and delivering qualified outputs to raise customer satisfaction.

Paper Nr: 186
Title:

OPTIMUM DCT COMPRESSION OF MEDICAL IMAGES USING NEURAL NETWORKS

Authors:

Adnan Khashman and Kamil Dimililer

Abstract: Medical imaging requires storage of large quantities of digitized data. Efficient storage and transmission of medical images in telemedicine is therefore of utmost importance. Due to constrained bandwidth and storage capacity, a medical image must be compressed before transmission or storage. An ideal image compression system must yield high-quality compressed images with a high compression ratio; this can be achieved using DCT-based image compression. However, the contents of the image affect the choice of an optimum compression ratio. In this paper, a neural network is trained to relate x-ray image contents to their optimum compression ratio. Once trained, the optimum DCT compression ratio of an x-ray image can be chosen upon presenting the image to the network. Experimental results suggest that our proposed system can be used efficiently to compress x-rays while maintaining high image quality.

Paper Nr: 210
Title:

A MINING FRAMEWORK TO DETECT NON-TECHNICAL LOSSES IN POWER UTILITIES

Authors:

Félix Biscarri, Iñigo Monedero, Carlos León, Juan I. Guerrero, Jesús Biscarri and Rocío Millán

Abstract: This paper deals with the characterization of customers in power companies in order to detect consumption Non-Technical Losses (NTL). A new framework is presented to find relevant knowledge about the particular characteristics of electric power customers. The authors use two innovative statistical estimators to weigh the variability and trend of customer consumption. The final classification model is presented as a rule set, based on discovering association rules in the data. The work is illustrated by a case study on a real database.

Paper Nr: 216
Title:

INTELLIGENT SURVEILLANCE FOR TRAJECTORY ANALYSIS - Detecting Anomalous Situations in Monitored Environments

Authors:

J. Albusac, J. J. Castro-Schez, L. Jimenez-Linares, D. Vallejo and L. M. Lopez-Lopez

Abstract: Recently, there has been growing interest in the development and deployment of intelligent surveillance systems capable of detecting and analyzing simple and complex events that take place in scenes monitored by cameras. Within this context, the use of expert knowledge may offer a realistic solution for the design of a surveillance system. In this paper, we briefly describe the architecture of an intelligent surveillance system based on normality components and expert knowledge. These components specify how a certain object should ideally behave according to one concept. A specific normality component that analyzes the trajectories followed by objects is studied in depth in order to analyze behaviors in an outdoor environment. The analysis of trajectories in the surveillance context is an interesting issue because a moving object always has a goal in an environment, and it usually heads towards one destination to achieve it.

Paper Nr: 234
Title:

USING GRA FOR 2D INVARIANT OBJECT RECOGNITION

Authors:

T.-H. Sun, J. C. Liu, C.-H. Tang and F.-C. Tien

Abstract: Invariant features are vital to the domain of pattern recognition. This research develops a vision-based invariant recognizer for 2D objects. We present a recognition method that adopts the KRA invariant feature extractor and grey relational analysis (GRA). The feature extraction derives translation-, rotation-, and scaling-free features from the sequential boundary, described by its K-curvature. Our work represents the object profile with the K-curvature to obtain the position-invariant property, and then applies the autocorrelation transformation to ensure the orientation-invariant property. Experiments also reveal that the proposed method, with either the GRA or MD matching method, offers distinctiveness and effectiveness for part recognition.

Paper Nr: 244
Title:

AN INVESTIGATION INTO DYNAMIC CUSTOMER REQUIREMENT USING COMPUTATIONAL INTELLIGENCE

Authors:

Yih Tng Chong and Chun-Hsien Chen

Abstract: The twenty-first century is marked by the fast evolution of customer tastes and needs. Research has shown that customer requirements can vary in the temporal space between product conceptualisation and market introduction. In markets characterized by fast-changing consumer needs, the products generated might often not fit consumer needs as companies originally expected. This paper advocates the proactive management and analysis of dynamic customer requirements in a bid to lower the risk inherent in developing products for fast-shifting markets. A customer requirements analysis and forecast (CRAF) system that can address this issue is introduced in this paper. Computational intelligence methodologies, viz. artificial immune systems and artificial neural networks, are found to be potential techniques for handling and analysing dynamic customer requirements. The investigation aims to support product development functions in the pursuit of generating products for near-future markets.

Paper Nr: 247
Title:

THE ROLE OF DATA MINING TECHNIQUES IN EMERGENCY MANAGEMENT

Authors:

Ning Chen and An Chen

Abstract: Emergency management is becoming more and more attractive in both theory and practice due to the frequently occurring incidents in the world. The objective of emergency management is to make optimal decisions that decrease or diminish the harm caused by incidents. Nowadays, the overwhelming amount of information leads to a great need for effective data analysis for the purpose of well-informed decisions. The potential of data mining has been demonstrated through the success of decision-making modules in present-day emergency management systems. In this paper, we review advanced data mining techniques applied in emergency management and indicate some promising future research directions.

Paper Nr: 252
Title:

MINING BOTH MAXIMAL AND CLOSED EMBEDDED UNORDERED TREES

Authors:

Tarek F. Gharib

Abstract: Tree mining has a wide range of applications because of the use of tree structures in many data representation formats, such as DNA, XML databases, computer network simulation, and pattern recognition. In this paper, we present an efficient approach to discover all closed and maximal frequent embedded unordered sub-trees. Experiments on both synthetic and real datasets, against known algorithms for mining closed embedded unordered trees, demonstrate the effectiveness and efficiency of the technique.

Paper Nr: 266
Title:

A DECISION SUPPORT SYSTEM FOR MULTI-PLANT ASSEMBLY SEQUENCE PLANNING USING A PSO APPROACH

Authors:

Yuan-Jye Tseng, Jian-Yu Chen and Feng-Yi Huang

Abstract: In a multi-plant collaborative manufacturing system in a global logistics chain, a product can be manufactured and assembled at different plants located at various locations. In this research, a decision support system for multi-plant assembly sequence planning is presented. The multi-plant assembly sequence planning model integrates two tasks, assembly sequence planning and plant assignment. In assembly sequence planning, the components and assembly operations are sequenced according to the operational constraints and precedence constraints to achieve assembly cost objectives. In plant assignment, the components and assembly operations are assigned to the suitable plants under the constraints of plant capabilities to achieve multi-plant cost objectives. A particle swarm optimization (PSO) solution approach is presented by encoding a particle using a position matrix defined by the numbers of components and plants. The PSO algorithm simultaneously performs assembly sequence planning and plant assignment with an objective of minimizing the total of assembly operational costs and multi-plant costs. The main contribution lies in the new multi-plant assembly sequence planning model and the new PSO solution method. The test results show that the presented method is feasible and efficient for solving the multi-plant assembly sequence planning problem. In this paper, an example product is tested and illustrated.

Paper Nr: 294
Title:

TERM WEIGHTING: NOVEL FUZZY LOGIC BASED METHOD VS. CLASSICAL TF-IDF METHOD FOR WEB INFORMATION EXTRACTION

Authors:

Jorge Ropero, Ariel Gómez, Carlos León and Alejandro Carrasco

Abstract: Solving the term weighting problem is one of the most important tasks in Information Retrieval and Information Extraction. Typically, the TF-IDF method has been widely used to determine the weight of a term. In this paper, we propose a novel alternative method based on fuzzy logic. The main advantage of the proposed method is that it obtains better results, especially in terms of extracting not only the most suitable information but also related information. This method will be used in the design of a Web Intelligent Agent that will soon be deployed on the University of Seville web page.

Paper Nr: 297
Title:

DECISION SUPPORT SYSTEM FOR CLASSIFICATION OF NATURAL RISK IN MARITIME CONSTRUCTION

Authors:

Marco Antonio García Tamargo, Alfredo S. Alguero García, Víctor Castro Amigo, Amelia Bilbao Terol and Andrés Alonso Quintanilla

Abstract: The objective of this paper is the prevention of workplace hazards in maritime works – ports, drilling and others – that may arise from the natural surroundings: tides, wind, visibility, rain and so on. On the basis of both historical and predicted data in certain variables, a system has been designed that uses data mining techniques to provide prior decision-making support as to whether to execute given work on a particular day. The system also yields a numerical evaluation of the risk of performing the activity according to the additional circumstances affecting it: the number of workers and the machinery involved, the estimated monetary cost of an accident and so on.

Paper Nr: 300
Title:

BUILDING TAILORED ONTOLOGIES FROM VERY LARGE KNOWLEDGE RESOURCES

Authors:

Victoria Nebot and Rafael Berlanga

Abstract: Nowadays, very large domain knowledge resources are being developed in domains like Biomedicine. Users and applications can benefit enormously from these repositories in very different tasks, such as visualization, vocabulary homogenization and classification. However, due to their large size and lack of formal semantics, they cannot be properly managed and exploited. Instead, it is necessary to derive small and useful logic-based ontologies from these large knowledge resources so that they become manageable and the user can benefit from the encoded semantics. In this work, we present a novel framework for efficiently indexing and generating ontologies according to user requirements. Moreover, the generated ontologies are encoded using OWL logic-based axioms so that they are provided with reasoning capabilities. The framework relies on an interval labeling scheme that efficiently manages the transitive relationships present in the domain knowledge resources. We have evaluated the proposed framework on the Unified Medical Language System (UMLS). The results show very good performance and scalability, demonstrating the applicability of the proposed framework in real scenarios.

Paper Nr: 315
Title:

A PROJECTION-BASED HYBRID SEQUENTIAL PATTERNS MINING ALGORITHM

Authors:

Chichang Jou

Abstract: Sequential pattern mining finds frequently occurring patterns of item sequences from the serial order of items in a transaction database. The sets of frequent hybrid sequential patterns obtained by previous research are either incomplete or do not scale with growing database sizes. We design and implement a Projection-based Hybrid Sequential PAttern Mining algorithm, PHSPAM, to remedy these problems. PHSPAM first builds a Supplemented Frequent One Sequence itemset to collect items that may appear in frequent hybrid sequential patterns. The mining procedure is then performed recursively, in the pattern-growth manner, to calculate the support of patterns through projected position arrays, projected support arrays, and projected databases. We compare the results and performance of PHSPAM with those of other hybrid sequential pattern mining algorithms, GFP2 and CHSPAM.

Paper Nr: 316
Title:

THE SIGNING OF A PROFESSIONAL ATHLETE - Reducing Uncertainty with a Weighted Mean Hemimetric for Φ - Fuzzy Subsets

Authors:

Julio Rojas-Mora and Jaime Gil-Lafuente

Abstract: In this paper we present a tool to help reduce the uncertainty present in the decision-making process associated with the selection and hiring of a professional athlete. A weighted mean hemimetric for Φ-fuzzy subsets with trapezoidal fuzzy numbers (TrFNs) as their elements allows candidates to be compared with the "ideal" player that the technical body of a team believes should be hired.

Paper Nr: 342
Title:

GRAPH STRUCTURE LEARNING FOR TASK ORDERING

Authors:

Yiming Yang, Abhimanyu Lad, Henry Shu, Bryan Kisiel, Chad Cumby, Rayid Ghani and Katharina Probst

Abstract: In many practical applications, multiple interrelated tasks must be accomplished sequentially through user interaction with retrieval, classification and recommendation systems. The ordering of the tasks may have a significant impact on the overall utility (or performance) of the systems; hence optimal ordering of tasks is desirable. However, manual specification of optimal ordering is often difficult when task dependencies are complex, and exhaustive search for the optimal order is computationally intractable when the number of tasks is large. We propose a novel approach to this problem by using a directed graph to represent partial-order preferences among task pairs, and using link analysis (HITS and PageRank) over the graph as a heuristic to order tasks based on how important they are in reinforcing and propagating the ordering preference. These strategies allow us to find near-optimal solutions with efficient computation, scalable to large applications. We conducted a comparative evaluation of the proposed approach on a form-filling application involving a large collection of business proposals from the Accenture Consulting & Technology Company, using SVM classifiers to recommend keywords, collaborators, customers, technical categories and other related fillers for multiple fields in each proposal. With the proposed approach we obtained near-optimal task orders that improved the utility of the recommendation system by 27% in macro-averaged F1, and 13% in micro-averaged F1, compared to the results obtained using arbitrarily chosen orders, and that were competitive against the best order suggested by domain experts.

Paper Nr: 361
Title:

KEY CHARACTERISTICS IN SELECTING SOFTWARE TOOLS FOR KNOWLEDGE MANAGEMENT

Authors:

Hanlie Smuts, Alta van der Merwe and Marianne Loock

Abstract: The shift to knowledge as the primary source of value results in the new economy being led by those who manage knowledge effectively. Today’s organisations are creating and leveraging knowledge, data and information at an unprecedented pace – a phenomenon that makes the use of technology not an option, but a necessity. Software tools in knowledge management are a collection of technologies and are not necessarily acquired as a single software solution. Furthermore, these knowledge management software tools have the advantage of using the organisation’s existing information technology infrastructure. Organisations and business decision makers spend a great deal of resources and make significant investments in the latest technology, systems and infrastructure to support knowledge management. It is imperative that these investments are validated properly, made wisely and that the most appropriate technologies and software tools are selected or combined to facilitate knowledge management. In this paper, we propose a set of characteristics that should support decision makers in the selection of software tools for knowledge management. These characteristics were derived from both in-depth interviews and existing theory in publications.

Paper Nr: 387
Title:

TOWARDS A SEMANTIC SYSTEM FOR MANAGING CLINICAL PROCESSES

Authors:

Ermelinda Oro and Massimo Ruffolo

Abstract: Managing costs and risks is a high-priority theme for health care professionals and providers. A promising approach for reducing costs and risks, and enhancing patient safety, is the definition of process-oriented clinical information systems. In the area of health care information systems, a number of systems and approaches to the representation and management of medical knowledge and clinical processes are available, but no currently available system provides an integrated approach to both declarative and procedural medical knowledge. In this work, a clinical process management system aimed at supporting a semantic, process-centered vision of health care practices is described. The system is founded on an ontology-based clinical knowledge representation framework that allows representing and managing, in a unified way, both medical knowledge and clinical processes. The system provides functionalities for: (i) designing clinical processes by exploiting existing and ad-hoc medical ontologies and a guideline base; (ii) executing clinical processes and monitoring their evolution by adopting alerting techniques that help prevent risks and errors; (iii) analyzing clinical processes by semantic querying and data mining techniques, making available decision support features able to contain risks and to enhance cost control and patient safety.

Paper Nr: 393
Title:

MINING PATTERNS IN THE PRESENCE OF DOMAIN KNOWLEDGE

Authors:

Cláudia Antunes

Abstract: One of the main difficulties of pattern mining is dealing with items of different natures in the same itemset, which can occur in any domain except basket analysis. Indeed, if we consider the analysis of any transactional database composed of several entities and relationships, it is easy to see that the equality function may be different for each element, which complicates the identification of frequent patterns. This situation is just one example of the need to use domain knowledge to manage the discovery process, but several others, no less important, can be enumerated, such as the need to consider patterns at higher levels of abstraction or the ability to deal with structured data. In this paper, we show how the Onto4AR framework can be exploited to overcome these situations in a natural way, illustrating its use in the analysis of two distinct case studies. In the first one, exploring a cinematographic dataset, we capture patterns that characterize kinds of movies in accordance with the actors present in their casts and their roles. In the second one, identifying molecular fragments, we find structured patterns, including chains, rings and stars.

Paper Nr: 433
Title:

COORDINATION IN MULTI-AGENT DECISION SUPPORT SYSTEM - Application to a Boiler Combustion Management System (GLZ)

Authors:

Noria Taghezout, Abdelkader Adla and Pascale Zaraté

Abstract: In multi-agent systems, the cognitive capability present in an agent can be deployed to achieve effective problem solving through the combined effort of the system and the user. This offers the potential to automate a far wider part of the problem-solving task than was possible with classical DSS. In this paper, we propose to integrate agents into a group decision support system. The resulting system, MADS, is designed to support operators during contingencies. We experiment with our system on a case of boiler breakdown: detecting a functioning defect of the boiler (GLZ: Gas Liquefying Zone), diagnosing the defect, and suggesting one or several appropriate cure actions. In MADS, the communication support enhances the communication and coordination capabilities of participants. A simple scenario is given to illustrate the feasibility of the proposal.

Paper Nr: 469
Title:

USER-DRIVEN ASSOCIATION RULE MINING USING A LOCAL ALGORITHM

Authors:

Claudia Marinica, Andrei Olaru and Fabrice Guillet

Abstract: One of the main issues in the process of Knowledge Discovery in Databases is the mining of association rules. Although a great variety of pattern mining algorithms have been designed for this purpose, their main problem lies in the large number of extracted rules, which need to be filtered in a post-processing step, resulting in fewer but more interesting results. In this paper, we propose a new algorithm that allows the user to explore the rule space locally and incrementally. The user's interests and preferences are represented by means of a newly proposed formalism, Rule Schemas. The method has been successfully tested on the database provided by Nantes Habitat.

Paper Nr: 491
Title:

MONITORING COOPERATIVE BUSINESS CONTRACTS IN AN INSTITUTIONAL ENVIRONMENT

Authors:

Henrique Lopes Cardoso and Eugénio Oliveira

Abstract: The automation of B2B processes is currently a hot research topic. In particular, multi-agent systems have been used to address this arena, where agents can represent enterprises in an interaction environment, automating tasks such as contract negotiation and enactment. Contract monitoring tools become more important as the level of automation of business relationships increases. When business is seen as a joint activity that pursues a common goal, the successful execution of the contract benefits all involved parties, and thus each of them should try to facilitate the compliance of its partners. Taking these concerns into account, and inspecting international legislation on trade procedures, in this paper we present an approach to modelling contractual obligations: obligations are directed from bearers to counterparties and have flexible deadlines. We formalize the semantics of such obligations using temporal logic, and we provide rules that allow them to be monitored. The proposed implementation is based on a rule-based forward-chaining production system.

Paper Nr: 494
Title:

A SIMULATION-BASED METHODOLOGY TO ASSIST DECISION-MAKERS IN REAL VEHICLE ROUTING PROBLEMS

Authors:

Angel A. Juan, Daniel Riera, David Masip, Josep Jorba and Javier Faulin

Abstract: The aim of this work is to present a simulation-based algorithm that not only provides a competitive solution for instances of the Capacitated Vehicle Routing Problem (CVRP), but is also able to efficiently generate a full database of alternative good solutions with different characteristics. These characteristics are related to solution properties such as route attractiveness, load balancing, non-tangible costs, fuzzy preferences, etc. This double-goal approach can be especially interesting for the decision-maker, who can use the algorithm to construct a database of solutions and then query it to obtain those feasible solutions that better fit his/her utility function without incurring a severe increase in costs. To provide high-quality solutions, our algorithm combines a classical CVRP heuristic, the Clarke and Wright savings method, with Monte Carlo simulation using state-of-the-art random number generators. The resulting algorithm is tested against some well-known benchmarks, and the results obtained so far are promising enough to encourage future developments and improvements of the algorithm and its applications in real-life scenarios.

Paper Nr: 511
Title:

A LOGIC PROGRAMMING FRAMEWORK FOR LEARNING BY IMITATION

Authors:

Grazia Bombini, Nicola Di Mauro, Teresa M. A. Basile, Stefano Ferilli and Floriana Esposito

Abstract: Humans use imitation as a mechanism for acquiring knowledge, i.e. they use instructions and/or demonstrations provided by other humans. In this paper, we propose a logic programming framework for learning from imitation, in order to make an agent able to learn from relational demonstrations. In particular, demonstrations are received incrementally and used as training examples while the agent interacts in a stochastic environment. This logical framework makes it possible to represent domain-specific knowledge as well as to compactly and declaratively represent complex relational processes. The framework has been implemented and validated with experiments in simulated agent domains.

Paper Nr: 523
Title:

I.M.P.A.K.T.: An Innovative Semantic-based Skill Management System Exploiting Standard SQL

Authors:

Eufemia Tinelli, Antonio Cascone, Michele Ruta, Tommaso Di Noia, Eugenio Di Sciascio and Francesco M. Donini

Abstract: The paper presents I.M.P.A.K.T. (Information Management and Processing with the Aid of Knowledge-based Technologies), a semantic-enabled platform for skills and talent management. Despite fully exploiting recent advances in semantic technologies, the proposed system relies only on standard SQL queries. Distinguishing features include: the possibility to express both strict requirements and preferences in the requested profile, a logic-based ranking of retrieved candidates, and the explanation of ranking results.

Paper Nr: 524
Title:

TOWARDS A UNIFIED STRATEGY FOR THE PREPROCESSING STEP IN DATA MINING

Authors:

Camelia Vidrighin Bratu and Rodica Potolea

Abstract: Data-related issues represent the main obstacle in obtaining a high-quality data mining process. Existing strategies for preprocessing the available data usually focus on a single aspect, such as incompleteness, dimensionality, or the filtering out of “harmful” attributes. In this paper we propose a unified methodology for data preprocessing, which considers several aspects at the same time. The novelty of the approach consists in enhancing the data imputation step with information from the feature selection step, and performing both operations jointly, as two phases of the same activity. The methodology performs data imputation only on the attributes which are optimal for the class (from the feature selection point of view). Imputation is performed using machine learning methods. When imputing values for a given attribute, the optimal subset (of features) for that attribute is considered. The methodology is not restricted to the use of a particular technique, but can be applied using any existing data imputation and feature selection methods.

Paper Nr: 541
Title:

SEMANTIC ARGUMENTATION IN DYNAMIC ENVIRONMENTS

Authors:

Jörn Sprado and Björn Gottfried

Abstract: Decision Support Systems play a crucial role when controversial points of views are to be considered in order to make decisions. In this paper we outline a framework for argumentation and decision support. This framework defines arguments which refer to conceptual descriptions of the given state of affairs. Based on their meaning and based on preferences that adopt specific viewpoints, it is possible to determine consistent positions depending on these viewpoints. We investigate our approach by examining soccer games, since many observed spatiotemporal behaviours in soccer can be interpreted differently. Hence, the soccer domain is particularly suitable for investigating spatiotemporal decision support systems.

Paper Nr: 555
Title:

HYBRID OPTIMIZATION TECHNIQUE FOR ARTIFICIAL NEURAL NETWORKS DESIGN

Authors:

Cleber Zanchettin and Teresa B. Ludermir

Abstract: In this paper a global and local optimization method is presented. This method is based on the integration of the Simulated Annealing, Tabu Search, Genetic Algorithms and Backpropagation heuristics. The performance of the method is investigated in the optimization of Multi-Layer Perceptron artificial neural network architectures and weights. The heuristics perform the search in a constructive way, based on the pruning of irrelevant connections among the network nodes. Experiments demonstrated that the method can also be used for relevant feature selection. Experiments are performed with four classification datasets and one prediction dataset.

Paper Nr: 576
Title:

ESTIMATING GREENHOUSE GAS EMISSIONS USING COMPUTATIONAL INTELLIGENCE

Authors:

Joaquim Augusto Pinto Rodrigues, Luiz Biondi Neto, Pedro Henrique Gouvêa Coelho and João Carlos Correia Baptista Soares de Mello

Abstract: This work proposes a Neuro-Fuzzy Intelligent System – ANFIS (Adaptive Network based Fuzzy Inference System) – for the annual forecast of greenhouse gas (GHG) emissions into the atmosphere. The purpose of this work is to apply a Neuro-Fuzzy System for annual GHG forecasting based on existing emissions data covering the last 37 years in Brazil. Such emissions concern tCO2 (tons of carbon dioxide) resulting from fossil fuel consumption for energy purposes, as well as those related to changes in land use, obtained from deforestation indexes. Economic and population growth indexes have also been considered. The system modeling took into account the definition of the input parameters for the forecast of GHG measured in terms of tons of CO2. Three input variables have been used to estimate total tCO2 emissions one year ahead. The ANFIS Neuro-Fuzzy Intelligent System is a hybrid system that enables learning capability in a fuzzy inference system to model non-linear and complex processes in a vague information environment. The results indicate that the Neuro-Fuzzy System produces consistent estimates, validated by actual test data.

Paper Nr: 629
Title:

INTEGRATING AGENTS WITH CONNECTIONIST SYSTEMS TO EXTRACT NEGOTIATION ROUTINES

Authors:

Marisa Masvoula, Panagiotis Kanellis and Drakoulis Martakos

Abstract: Routinization is a technique of knowledge exploitation based on the repetition of acts. When applied to negotiations, it results in the substitution of parts of processes, or even whole processes, relieving negotiators of significant deliberation and decision-making effort. Although it has an important impact on negotiators, the risk of establishing ineffective routines is evident. In our paper we discuss weaknesses and limitations and propose a generic framework to address them. We consider routines as evolving processes and take two orientations. The first concerns a communicative dimension that allows for external evaluation of the applied routines; the second concerns reinforcement of the system core with an evolving structure that adjusts to routine changes and flexibly incorporates new knowledge.

Posters
Paper Nr: 22
Title:

A NEW CASE-BASED APPROXIMATE REASONING BASED ON SPMF IN LINGUISTIC APPROXIMATION

Authors:

Dae-Young Choi and Ilkyeun Ra

Abstract: A new case-based approximate reasoning (CBAR) based on SPMF in linguistic approximation is proposed. It provides an efficient mechanism for linguistic approximation within linear time complexity.

Paper Nr: 37
Title:

SOCIAL ROBOTS, MORAL EMOTIONS

Authors:

Ana R. Delgado

Abstract: The affective revolution in Psychology has produced enough knowledge to implement abilities of emotional recognition and expression in robots. However, the emotional prototypes are still very basic, almost caricaturized ones. If the goal is constructing robots that respond flexibly, in order to fulfill market demands from different countries while respecting the moral values implicit in the social behavior of their inhabitants, then these robots will have to be programmed according to detailed descriptions of the emotional experiences that are considered relevant in the interaction context in which the robot is going to be put to work (e.g., assisting people with cognitive or motor disabilities). The advantages of this approach are illustrated with an empirical study on contempt, the seventh basic emotion in Ekman’s theory, and one of the “rediscovered” moral emotions in Haidt’s New Synthesis. A phenomenological analysis of the experience of contempt in 48 Spanish subjects shows the structure and some variations – prejudiced, self-serving, and altruistic – of this emotion. Quantitative information was later obtained with the help of blind coders. Some spontaneous facial expressions that sometimes accompany self-reports are also shown. Finally, some future directions in the Robotics-Psychology intersection are presented (e.g., gender differences in social behavior).

Paper Nr: 46
Title:

METHODS AND TOOLS FOR MODELLING REASONING IN DIAGNOSTIC SYSTEMS

Authors:

Alexander P. Eremeev and Vadim N. Vagin

Abstract: Methods of case-based reasoning for the solution of real-time diagnostics and forecasting problems in intelligent decision support systems (IDSS) are considered. Special attention is drawn to a case library structure for real-time IDSS and to the application of this reasoning type for the diagnostics of complex object states. The problem of finding the best current measurement points in model-based device diagnostics using Assumption-based Truth Maintenance Systems (ATMS) is also considered. New heuristic approaches to choosing current measurement points on the basis of supporting and inconsistent environments are presented. This work was supported by the Russian Foundation for Basic Research (projects No. 08-01-00437 and No. 08-07-00212).

Paper Nr: 76
Title:

STRATEGIES FOR ROUTE PLANNING ON CATASTROPHE ENVIRONMENTS - Coordinating Agents on a Fire Fighting Scenario

Authors:

Pedro Abreu and Pedro Mendes

Abstract: The concept of multi-agent systems (MAS) appeared when computer science researchers needed to solve problems involving the simulation of real environments with several participants (agents). Solving these requires a coordination process between agents and, in some cases, negotiation. Such is the case of a catastrophe scenario, for instance a fire, which requires intervention to minimize its consequences. In this particular case the agents (firemen) must have a good coordination process to reach their fire-fighting positions as fast as possible. The main goal of this project is to create an optimal strategy to calculate the best path to the fire-fighting position. Tests were conducted on an existing simulation platform, Pyrosim. Three factors play an important role: wind (intensity and direction), ground topology and vegetation variety. In the end the results were quite satisfactory, mainly with regard to the agents' main objective. The A* algorithm proved to be feasible for this particular problem, and the coordination process between agents was implemented successfully. In the future this project may have its agents ported to the BDI concept.

Paper Nr: 78
Title:

AGENT-BASED MODELING AND SIMULATION OF RESOURCE ALLOCATION IN ENGINEERING CHANGE MANAGEMENT

Authors:

Young B. Moon and Bochao Wang

Abstract: An engineering change (EC) refers to a modification of products and components, including purchased parts or even supplies, after the product design is finished and released to the market. While any company involved in product development has to deal with engineering changes, the area of engineering change management has not received much attention from the research community. This is partly because of its complexity and the lack of appropriate research tools. In this paper, we present preliminary research results on modeling the engineering change management (ECM) process using an agent-based modeling and simulation technique. The aim of the research reported in this paper is to study optimal strategies of resource allocation for a company when it is dealing with two kinds of ECs: "necessary ECs" and "initialized ECs." We discuss results from these simulation models to illustrate some insights into ECM, and present several research directions derived from these results.

Paper Nr: 82
Title:

EVALUATING GENERALIZED ASSOCIATION RULES COMBINING OBJECTIVE AND SUBJECTIVE MEASURES AND VISUALIZATION

Authors:

Magaly Lika Fujimoto, Veronica Oliveira de Carvalho and Solange Oliveira Rezende

Abstract: From the user's point of view, many problems can be found during the post-processing of association rules, since a large number of patterns can be obtained, which complicates the comprehension and identification of interesting knowledge. Therefore, this paper proposes an approach to improve knowledge comprehensibility and to facilitate the identification of interesting generalized association rules during evaluation. This aid is realized by combining objective and subjective measures with information visualization techniques, implemented in a system called RulEE-GARVis.

Paper Nr: 100
Title:

A TOOL FOR MEASURING INDIVIDUAL INFORMATION COMPETENCY ON AN ENTERPRISE INFORMATION SYSTEM

Authors:

Chui Young Yoon, In Sung Lee and Byung Chul Shin

Abstract: This study presents a tool that can efficiently measure individual information competency for executing given tasks on an enterprise information system. The measurement items are extracted from the major components of a general competency. Through factor analysis and reliability analysis, a 14-item tool is proposed to comprehensively measure individual information capability. The tool's application and utility are confirmed by applying it to measure the information competency of individuals in an enterprise.

Paper Nr: 114
Title:

DESIGN A REVERSE LOGISTICS INFORMATION SYSTEM WITH RFID

Authors:

Carman Ka Man Lee, Tan Wil Sern and Eng Wah Lee

Abstract: Recently, reverse logistics management has become an integral part of the business cycle. This is mainly due to the need to be environmentally friendly and the urgent need to reuse scarce resources. Traditionally, reverse logistics activities have been a cost center for most businesses, generating no extra revenue. However, due to the recent increase in commodity and energy prices, reverse logistics management could eventually become a cost-saving method. In this research, we propose using Radio Frequency Identification (RFID) technology to better optimize and streamline reverse logistics operations. Using RFID, we try to eliminate some of the unknowns in the reverse logistics flow that make reverse logistics models complicated. Furthermore, a genetic algorithm is used to optimize the placement of initial collection centers so as to cover the largest population possible, in order to reduce logistics costs and provide convenience to end users. This study is based largely on a literature review of past works, and experiments are also conducted on RFID hardware to test its suitability. The significance of this paper is in adopting ubiquitous RFID technology and genetic algorithms for reverse logistics so as to obtain an economic reverse logistics network.

Paper Nr: 117
Title:

SIHC: A STABLE INCREMENTAL HIERARCHICAL CLUSTERING ALGORITHM

Authors:

Ibai Gurrutxaga, Olatz Arbelaitz, José I. Martín, Javier Muguerza, Jesús M. Pérez and Iñigo Perona

Abstract: SAHN is a widely used agglomerative hierarchical clustering method. Nevertheless, it is not an incremental algorithm and is therefore not suitable for many real application areas where not all data is available at the beginning of the process. Some authors have proposed incremental variants of SAHN whose goal was to obtain the same results in incremental environments. This approach is not practical, since it frequently must rebuild the hierarchy, or a big part of it, and often leads to completely different structures. We propose a novel algorithm, called SIHC, that updates SAHN hierarchies with minor changes to the previous structures. This property makes it suitable for real environments. Results on 11 synthetic and 6 real datasets show that SIHC builds high-quality clustering hierarchies. This quality level is similar to, and sometimes better than, SAHN's. Moreover, the computational complexity of SIHC is lower than SAHN's.

Paper Nr: 118
Title:

MODELING AND SIMULATION FOR DECISION SUPPORT IN SOFTWARE PROJECT WORKFORCE MANAGEMENT

Authors:

Bernardo Giori Ambrósio, José Luis Braga, Moisés A. Resende-Filho and Jugurta Lisboa Filho

Abstract: This paper presents and discusses the construction of a system dynamics model, focusing on key managerial decision variables related to workforce management during requirements extraction in software development projects. Our model establishes the relationships among those variables, making it possible to analyze and better understand their mutual influences. Simulations conducted with the model made it possible to verify and foresee the consequences of risk factors (e.g. people turnover and high requirements volatility) on the quality and cost of work. Three scenarios (optimistic, baseline and pessimistic) are set using data from previous studies and data collected in a software development company.

Paper Nr: 166
Title:

STATISTICAL DECISIONS IN PRESENCE OF IMPRECISELY REPORTED ATTRIBUTE DATA

Authors:

Olgierd Hryniewicz

Abstract: The paper presents a new methodology for making statistical decisions when data is reported in an imprecise way. Such situations happen very frequently when quality features are evaluated by humans. We have demonstrated that traditional models based either on the multinomial distribution or on predefined linguistic variables may be insufficient for making correct decisions. Our model, which uses the concept of the possibility distribution, allows the separation of stochastic randomness from fuzzy imprecision, and provides a decision-maker with more information about the phenomenon of interest.

Paper Nr: 171
Title:

AN AGENT-BASED ARCHITECTURE FOR CANCER STAGING

Authors:

Miguel Miranda, António Abelha, Manuel Santos, José Machado and José Neves

Abstract: Cancer staging is the process by which physicians evaluate the spread of cancer. This is important since, in a good cancer staging system, the stage of disease helps to determine prognosis and assists in selecting therapies. A combination of physical examination, blood tests, and medical imaging is used to determine the clinical stage; if tissue is obtained via biopsy or surgery, examination of the tissue under a microscope can provide pathologic staging. On the other hand, good patient education may help to reduce health service costs and improve the quality of life of people with chronic or terminal conditions. In this paper a theoretically grounded model is endorsed to support the provision of computer-based information on cancer patients, along with the computational techniques used to implement it. Our goal is to develop an interactive agent-based computational system which may provide physicians with the right information, on time, adapted to the situation and to the process-based aspects of the patients' illness and treatment.

Paper Nr: 172
Title:

N-GRAMS-BASED FILE SIGNATURES FOR MALWARE DETECTION

Authors:

Igor Santos, Yoseba K. Penya, Jaime Devesa and Pablo G. Bringas

Abstract: Malware is any malicious code that has the potential to harm any computer or network. The amount of malware is increasing faster every year and poses a serious security threat. Thus, malware detection is a critical topic in computer security. Currently, signature-based detection is the most widespread method for detecting malware. Although this method is still used in most popular commercial antivirus software, it can only achieve detection once the virus has already caused damage and has been registered. Therefore, it fails to detect new malware. Applying a methodology proven successful in similar problem domains, we propose the use of n-grams (every substring of a larger string, of a fixed length n) as file signatures in order to detect unknown malware whilst keeping a low false positive ratio. We show that n-gram signatures provide an effective way to detect unknown malware.

Paper Nr: 281
Title:

INTELLIGENT SYSTEMS FOR RETAIL BANKING OPTIMIZATION - Optimization and Management of ATM Network System

Authors:

Darius Dilijonas, Virgilijus Sakalauskas, Dalia Kriksciuniene and Rimvydas Simutis

Abstract: The article analyzes the problems of optimization and management of an ATM (Automated Teller Machine) network system, related to the minimization of operating expenses such as cash replenishment, costs of funds, logistics and back-office processes. The suggested solution is based on merging two different artificial intelligence methodologies – neural networks and multi-agent technologies. The practical implementation of this approach made it possible to achieve better effectiveness in the researched ATM network. During the first stage, the system performs analysis based on artificial neural networks (ANN). The second stage is aimed at producing alternatives for ATM cash management decisions. The simulation and experimental tests of the method performed in distributed ATM networks reveal the good forecasting capacity of the ANN.

Paper Nr: 330
Title:

FUZZY CRITICAL ANALYSIS SOFTWARE

Authors:

Mariana Dumitrescu

Abstract: For the critical analysis of the Protected Electric Power Element-Protection System we built a complex software tool named “Fuzzy Event Tree Analysis” (FETA). We exemplify the critical analysis for the electric power protection system of radial and curled electrical lines. The paper shows how to use the proposed software in the critical analysis of faults, including abnormal operation, for the most important elements in power systems. The Fuzzy Event Tree method is modelled and technical safety results are computed by FETA.

Paper Nr: 343
Title:

BUSINESS ANALYSIS IN THE OLAP CONTEXT

Authors:

Emiel Caron and Hennie Daniels

Abstract: Today's multi-dimensional business or OnLine Analytical Processing (OLAP) databases have little support for sensitivity analysis. Sensitivity analysis is the analysis of how the variation in the output of a mathematical model can be apportioned, qualitatively or quantitatively, to different sources of variation in the input of the model. This functionality would give the OLAP analyst the possibility to play with “What if...?” questions in an OLAP cube, for example with questions of the form: “What happens to an aggregated value in the dimension hierarchy if I change the value of this data cell by so much?” These types of questions are important, for example, for managers who want to analyse the effect of changes in sales, costs, etc., on a product's profitability in an OLAP sales cube. In this paper, we describe an extension to the OLAP framework for business analysis in the form of sensitivity analysis.

Paper Nr: 347
Title:

W-NEG: A WORKFLOW NEGOTIATION SYSTEM

Authors:

Melise M. M. V. Paula, Danilo B. Lima, Luís Theodoro O. Camargo, Sergio Assis Rodrigues and Jano M. Souza

Abstract: It has been claimed that there are different methods for resolving conflict; however, the main one is to resolve conflicts through negotiation. This paper addresses NK-Sys, one of the Negotiation Support Systems developed, and a workflow approach titled W-Neg. Negotiators often attempt to resolve their conflicts through the use of intrinsic activities and their own skills. In W-Neg, we suggest a set of workflow models to tackle issues that may be in conflict at the negotiation table. As with any decision-making process, negotiations follow some well-known steps. Therefore, the management of the activities derived from these steps can be considered an alternative for improving negotiators' preparation. In this proposal, workflow technology is aligned with this alternative, since the main goal of workflow systems is to provide better business process management.

Paper Nr: 365
Title:

FORECASTING TOTAL SALES OF HIGH-TECH PRODUCTS - Daily Diffusion Models and a Genetic Algorithm

Authors:

Masaru Tezuka and Satoshi Munakata

Abstract: In recent years, the release interval of high-tech consumer products such as mobile phones and portable media players has been getting shorter. New models of mobile phones are released three times a year in Japan. Manufacturers have to avoid dead stock because the value of the previous model drops sharply after the launch of the new model. In this paper, we propose a method to forecast the total sales of such products. The method utilizes diffusion models for forecasting. Only a short-term sales record is available, since the sales are forecasted one month after the release. In order to make effective use of the available data, we use the day as the time unit of forecasting. To apply the diffusion models to daily demand forecasting, we derive the difference equation representation of the models and propose discrete-time diffusion models. Day-of-week-dependent parameters are introduced into the models. The proposed method is tested on data provided by a high-tech consumer products manufacturer. The result shows that the proposed method has an excellent forecasting ability.

Paper Nr: 376
Title:

IMPLEMENTATION OF INTENTION-DRIVEN SEARCH PROCESSES BY SPARQL QUERIES

Authors:

Olivier Corby, Catherine Faron-Zucker and Isabelle Mirbel

Abstract: Capitalisation of search processes is becoming a real challenge in many domains. By search process, we mean a sequence of queries that enables a community member to find comprehensive and accurate information by composing results from different information sources. In this paper we propose an intentional model based on semantic Web technologies and models, aiming both at the capitalisation, reuse and sharing of queries within a community and at the organisation of queries into formalized search processes. It is intended to support knowledge transfer on information searches between expert and novice members inside a community. Intention-driven search processes are represented by RDF datasets and operationalized by rules represented by SPARQL queries, applied in backward chaining using the CORESE semantic engine.

Paper Nr: 405
Title:

PERFORMING THE RETRIEVE STEP IN A CASE-BASED REASONING SYSTEM FOR DECISION MAKING IN INTRUSION SCENARIOS

Authors:

Jesus Conesa and Angela Ribeiro

Abstract: The present paper describes the implementation of a case-based reasoning system involved in a crisis management project for infrastructural building security. The goal is to achieve an expert system capable of making decisions in real time to quickly neutralize one or more intruders that threaten strategic installations. This article presents the development of the usual CBR stages, such as case representation, the retrieval phase and the validation process, mainly focusing on the retrieval phase, which is approached through two strategies: similarity functions and decision tree structures. The case design, such as the adopted discretization values, is also discussed. Finally, results on the retrieval-phase performance are shown and analyzed according to well-known cross-validation methods, such as k-fold cross-validation or leave-one-out. This work is supported by project CENIT-HESPERIA.

Paper Nr: 406
Title:

MasDISPO_xt - Process Optimisation by Use of Integrated, Agentbased Manufacturing Execution Systems Inside the Supply Chain of Steel Production

Authors:

Sven Jacobi, Christian Hahn and David Raber

Abstract: The production of steel normally constitutes the inception of many supply chains in different areas of industry. Steel manufacturing companies are strongly affected by bullwhip effects and other unpredictable influences along their production chains. Improving their operational efficiency is required to keep a competitive position on the market. Hence, flexible planning and scheduling systems are needed to support these processes, which are based on considerable amounts of data that can hardly be processed manually anymore. MasDISPO_xt is an agent-based generic online planning and scheduling system for the observation, at MES level, of the complete supply chain of Saarstahl AG, a globally respected steel manufacturer. This paper concentrates on the horizontal and vertical integration of the influences of rough planning on detailed planning and vice versa. Based on model-driven engineering, business processes are modeled at CIM level, and a service-oriented architecture is presented for the interoperability of all components, with legacy systems and others wrapped behind services. Finally, an agent-based detailed planning and scheduling approach ensuring interoperability in the horizontal and vertical directions is presented.

Paper Nr: 444
Title:

PATTERN RECOGNITION FOR DOWNHOLE DYNAMOMETER CARD IN OIL ROD PUMP SYSTEM USING ARTIFICIAL NEURAL NETWORKS

Authors:

Marco A. D. Bezerra, Leizer Schnitman, M. de A. Barreto Filho and J. A. M. Felippe de Souza

Abstract: This paper presents the development of an Artificial Neural Network system for Dynamometer Card pattern recognition in oil well rod pump systems. It covers the establishment of pattern classes and a set of standards for training and validation, the study of descriptors which allow the design and implementation of a feature extractor, training, analysis, and finally validation and a performance test with a real database.

Paper Nr: 453
Title:

KNOWLEDGE REPRESENTATION AND REASONING FOR AIR-NAILER COLOR CONFIGURATION BASED ON HSV SPACE

Authors:

Fan Jing, Shen Ying and Wang Zhihao

Abstract: Computer-aided color design is one of the hotspots in the industrial design area. Based on current color configuration research, this paper focuses on the knowledge representation and reasoning of color design for an air-nailer color configuration system. Firstly, color representation, including color type and color value, is discussed. Secondly, color reasoning based on HSV space among the main color, first assistant color and second assistant color is investigated. Finally, the application is presented, which has been tested to be efficient and accurate in air-nailer color design.

Paper Nr: 459
Title:

AN APPLICATION OF THE STUDENT RELATIONSHIP MANAGEMENT CONCEPT

Authors:

Maria Beatriz Piedade and Maribel Yasmina Santos

Abstract: It is largely accepted that one way to promote students' success is to implement processes that allow students to be closely monitored, their success to be evaluated, and their day-by-day activities to be followed. However, the implementation of these processes does not take place in many Higher Education Institutions due to the lack of appropriate institutional practices and of an adequate technological infrastructure able to support these practices. In order to overcome these conceptual and technological limitations, this paper presents the Student Relationship Management System (SRM System). The SRM System supports the SRM concept and the SRM practice, also presented here, and is implemented using the technological infrastructure that supports Business Intelligence (BI) systems. The SRM System was used in an application case (in a real context) to obtain knowledge about students and their academic behaviour. Such information is fundamental to support the decision-making associated with the teaching-learning process. All the obtained results are also presented and analysed in this paper.

Paper Nr: 510
Title:

INTEGRATING CASE-BASED REASONING WITH EVIDENCE-BASED PRACTICE FOR DECISION SUPPORT

Authors:

Expedito Carlos Lopes and Ulrich Schiel

Abstract: Evidence-Based Practice (EBP), an emergent paradigm, uses the premise that decision making can be based on scientific evidence available in reliable databases, usually found on sites over the Internet. However, EBP procedures do not provide mechanisms for retaining strategic information and knowledge from individual solutions, which could facilitate future learning by different end-users. On the other hand, Case-Based Reasoning (CBR) uses the history of similar cases to support decision making, but the retrieval of cases may not be sufficient to support the solution of problems. Since both research evidence and similar cases are important for decision making, this paper proposes the integration of the two paradigms to support the solution of complex problems. An example from the justice area is presented.

Paper Nr: 520
Title:

MODELLING COLLABORATIVE FORECASTING IN DECENTRALIZED SUPPLY CHAIN NETWORKS WITH A MULTIAGENT SYSTEM

Authors:

Jorge E. Hernández, Raúl Poler and Josefa Mula

Abstract: Information technology has become a strong modelling approach to support the complexities involved in a process. One example of this technology is the multiagent system which, from a decentralized supply chain configuration perspective, supports the information sharing processes that any of its nodes can carry out to support its processes in a collaborative manner, for example the forecasting process. Therefore, this paper presents a novel collaborative forecasting model for supply chain networks based on a multiagent system modelling approach. The hypothesis presented herein is that, by collaborating in the information exchange process, fewer errors are made in the forecasting process.

Paper Nr: 552
Title:

CRONUS: A TASK MANAGEMENT SYSTEM TO SUPPORT SOFTWARE DEVELOPMENT

Authors:

Yura Ferreira, Sergio Assis Rodrigues, Divany Gomes Lima, Márcio Luiz Ferreira Duran, José Roberto Blaschek and Jano Moreira de Souza

Abstract: Currently, information technology professionals have become increasingly interested in factors that may have an impact on project management effectiveness and the success of projects. This article introduces a task management tool which complements traditional tools to support the planning, control and execution of software development projects.

Paper Nr: 604
Title:

EVALUATING RISKS IN SOFTWARE NEGOTIATIONS THROUGH FUZZY COGNITIVE MAPS

Authors:

Sergio Assis Rodrigues, Efi Papatheocharous, Andreas S. Andreou and Jano Moreira De Souza

Abstract: Risks are inevitably and permanently present in software negotiations, and they can directly influence the success or failure of negotiations. Risks should be avoided when they represent a threat and encouraged when they denote an opportunity. This work examines the influence of some negotiation elements in the area of risk and cost estimation, both factors that directly influence software development negotiation. In this work, risk quantification is proposed to translate risk impact into measurable values that may be taken into consideration during negotiations. The proposed model involves an assessment tool based on basic negotiation elements – namely relationship, interests, cost and time – quantifying their mutual influences, and makes use of Fuzzy Cognitive Maps (FCMs) for developing the associations around basic risk elements on the one hand and attaining an innovative risk quantification model for improved software negotiations on the other. Indicative scenarios are presented to demonstrate the efficacy of the proposed approach.

Area 3 - Information Systems Analysis and Specification

Full Papers
Paper Nr: 49
Title:

A Service Integration Platform for the Labor Market

Authors:

Mariagrazia Fugini

Abstract: Employment Services are an important topic in the agenda of local governments and in the EU due to their social implications, such as sustainability, workforce mobility, workers’ re-qualification paths, and training for fresh graduates and students. The SEEMP system presented in this paper addresses these issues in different ways: starting bilateral communications between similar offices across neighbouring borders, building a federation of the local employment services, and merging isolated trials.

Paper Nr: 217
Title:

Developing Business Process Monitoring Probes to Enhance Organization Control

Authors:

Fabio Mulazzani, Barbara Russo and Giancarlo Succi

Abstract: This work presents business process monitoring agents we developed, called Probes. Probes make it possible to control process performance, aligning it with the company’s strategic goals. Probes offer real-time monitoring of strategic goal achievement, also increasing the understanding of the company’s activities. In this paper Probes are applied to a practical case of a bus company. Probes were developed and deployed into the company’s ERP system and determined a significant change in the strategy of the company and a corresponding enhancement of the performance of a critical business process.

Paper Nr: 277
Title:

Text Generation for Requirements Validation

Authors:

Petr Kroha and Manuela Rink

Abstract: In this paper, we describe a text generation method used in our novel approach to requirements validation in software engineering, which paraphrases a requirements model expressed in UML in natural language. The basic idea is that after an analyst has specified a UML model based on a requirements description, a text may be automatically generated that describes this model. Thus, users and domain experts are enabled to validate the UML model, which would generally not be possible otherwise, as most of them do not understand (semi-)formal languages such as UML. A corresponding text generator has been implemented and examples will be presented.

Paper Nr: 280
Title:

Automatic Compositional Verification of Business Processes

Authors:

Luis E. Mendoza and Manuel I. Capel

Abstract: Nowadays the Business Process Modelling Notation (BPMN) has become a standard for providing a notation readily understandable by all business process (BP) stakeholders when carrying out the Business Process Modelling (BPM) activity. In this paper, we present a new Formal Compositional Verification Approach (FCVA), based on the model-checking verification technique for software, integrated with a formal software design method called MEDISTAM-RT. Both are used to facilitate the development of the Task Model (TM) associated with a BP design. MEDISTAM-RT uses UML-RT as its graphical modelling notation and the CSP+T formal specification language for temporal annotations. The application of FCVA is aimed at guaranteeing the correctness of the TM with respect to the initial property specification derived from the BP rules. An instance of a BPM enterprise project related to the Customer Relationship Management (CRM) business is discussed in order to show a practical use of our proposal.

Paper Nr: 288
Title:

Actor Relationship Analysis for the i* Framework

Authors:

Shuichiro Yamamoto, Komon Ibe, June Verner, Karl Cox and Steven Bleistein

Abstract: The i* framework is a goal-oriented approach that addresses organizational IT requirements engineering concerns, and is considered an effective technique for analyzing dependencies between actors. However, the effectiveness and limitations of i* are unclear. When we modelled an industrial case with a large number of actors using i*, we discovered difficulties in (1) validating the completeness of the model, and (2) managing change. To solve these problems, we propose an actor relationship matrix analysis method (ARM) as a precursor to i* modeling, which we found aided in addressing the above two issues. This paper defines our method and demonstrates it with a case study. ARM enables requirements engineers to better ensure completeness of requirements in a repeatable and systematic manner that does not currently exist in the i* framework.

Paper Nr: 320
Title:

Towards Self-healing Execution of Business Processes based on Rules

Authors:

Mohamed Boukhebouze, Youssef Amghar, Aïcha-Nabila Benharkat and Zakaria Maamar

Abstract: In this paper we discuss the need to offer a self-healing execution of a business process within the BP-FAMA framework (Business Process Framework for Agility of Modelling and Analysis) presented in [1]. This is done by identifying errors in the process specification and reacting to possible performance failures in order to drive the process execution towards a stable situation. To achieve our objective, we propose to model the high-level process using a new declarative language based on business rules called BbBPDL (Rules based Business Process Description Language). In this language, a business rule has an Event-Condition-Action-Post condition-Post event-Compensation (ECA2PC) format. This allows translating a process into a cause/effect graph that is analyzed for the sake of ensuring the reliability of the business processes.

Paper Nr: 348
Title:

Towards Flexible Inter-enterprise Collaboration: A Supply Chain Perspective

Authors:

Boris Shishkov, Marten van Sinderen and Alexander Verbraeck

Abstract: Since neither uniformity nor pluriformity provides the answer to easing inter-enterprise collaborations, we address (inspired by relevant strengths of service-oriented architectures) the problem of supporting such collaborations from an infrastructure perspective. We propose architectural guidelines for interactively establishing a suitable inter-enterprise collaboration scheme before the exchange of actual content takes place. The proposed guidelines stem from an analysis of some currently popular approaches to achieving inter-enterprise collaboration with ICT means. Taking into account the strong relevance of these issues to the supply chain domain, we put our work in a supply chain perspective. We also illustrate our architectural guidelines with an example from this domain. It is expected that the research contribution reported in this paper will be useful as an additional result concerning (ICT-driven) inter-enterprise collaboration.

Paper Nr: 356
Title:

A Model-based Tool for Conceptual Modeling and Domain Ontology Engineering in OntoUML

Authors:

Alessander Botti Benevides and Giancarlo Guizzardi

Abstract: This paper presents a Model-Based graphical editor for supporting the creation of conceptual models and domain ontologies in a philosophically and cognitively well-founded modeling language named OntoUML. The Editor is designed in a way that, on one hand, it shields the user from the complexity of the ontological principles underlying this language. On the other hand, it reinforces these principles in the produced models by providing a mechanism for automatic formal constraint verification.

Paper Nr: 410
Title:

Concepts-based Traceability: Using Experiments to Evaluate Traceability Techniques

Authors:

Rodrigo Perozzo Noll and Marcelo Blois Ribeiro

Abstract: Knowledge engineering brings direct benefits to software development through the cognitive mapping between user expectations and the software solution, checking system consistency and requirements conformance. One of the potential benefits of knowledge representation is the definition of a standard domain terminology to enforce artifact traceability. This paper proposes a concepts-based approach to drive traceability through the integration of knowledge engineering activities into the Unified Process. The paper also presents an experiment and its replication to evaluate the precision and effort of concepts-based traceability against conventional requirements-based traceability techniques.

Paper Nr: 418
Title:

A Service-oriented Framework for Component-based Software Development: An i* Driven Approach

Authors:

Yves Wautelet, Youssef Achbany, Sodany Kiv and Manuel Kolp

Abstract: Optimization is a fundamental concept in our modern mature economy. Software development also follows this trend and, as a consequence, new techniques have appeared over the years. Among those we find service-oriented computing and component-based development. The first gives the structure and flexibility required in large industrial software developments; the second allows the reuse of generically developed code. This paper sits at the border of these paradigms and constitutes an attempt to integrate components into service-oriented modelling. Indeed, when developing huge multi-actor application packages, solutions to specific problems should be custom developed while others can be found in third-party offers. FaMOS-C, the framework proposed in this paper, allows modelling such problems and directly integrates a selection process among different components based on their performance in functional and non-functional aspects. The framework is first described and then evaluated on a case study in supply chain management.

Paper Nr: 419
Title:

A Process for Developing Adaptable and Open Service Systems: Application in Supply Chain Management

Authors:

Yves Wautelet, Youssef Achbany, Manuel Kolp and Jean-Charles Lange

Abstract: Service-oriented computing is becoming increasingly popular. It allows designing flexible and adaptable software systems that can be easily adopted on demand by software customers. Those benefits are of primary importance in the context of supply chain management; that is why this paper proposes to apply ProDAOSS, a process for developing adaptable and open service systems, to an industrial case study in outbound logistics. ProDAOSS is conceived as a plug-in for I-Tropos – a broader development methodology – so that it covers the whole software development life cycle. At the analysis level, flexible business processes are generically modelled with different complementary views. First of all, an aggregate services view of the whole applicative package is offered; then services are split using an agent ontology – through the i* framework – to represent them as an organization of agents. A dynamic view completes the documentation by offering the service realization paths. At the design stage, the service center architecture proposes a reference architectural pattern for realizing services in an adaptable and open manner.

Paper Nr: 445
Title:

Business Process-awareness in the Maintenance Activities

Authors:

Lerina Aversano and Maria Tortorella

Abstract: In this paper we focus on the usefulness of business process knowledge for clarifying change requirements concerning the supporting software systems. To this aim, the correctness and completeness of the change requirements’ impact were evaluated with and without business process knowledge. The results of this preliminary empirical study are encouraging and indicate that the business information effectively provides significant help to software maintainers.

Paper Nr: 502
Title:

BORM-points: Introduction and Results of Practical Testing

Authors:

Zdenek Struska and Robert Pergl

Abstract: This paper introduces the BORM-points method. The method is used for complexity estimation for information systems development. In the first part of the paper there is a detailed description of BORM-points and its specifics. In the second part there is a presentation of results of BORM-points application for real projects.

Paper Nr: 507
Title:

A Technology Classification Model for Mobile Content & Service Delivery Platforms

Authors:

Antonio Ghezzi, Filippo Renga and Raffaello Balocco

Abstract: The growing complexity of mobile “rich media” digital contents and services requires the integration of next-generation middleware platforms within Mobile Network Operators’ and Service Providers’ infrastructural architecture, to support the overall process of content creation, management and delivery. The purpose of the research is to design a technology classification model for Content & Services Delivery Platforms (CSDPs), the core of Mobile Middleware Technology Providers’ (MMTPs’) value proposition. A three-step theoretical framework is provided, which identifies a set of significant classification variables to support platform positioning analysis. Afterwards, by adopting a multiple case study research methodology, the model is applied to map the current CSDP offer presented by a sample of 24 companies classified as MMTPs, so as to test the framework’s validity and gain valuable insight into the actual “state of the art” of such solutions. The main findings show that existing platforms possess major strengths – e.g. a wide manageable content portfolio, integration between mobile and web channels, and frequent recourse to SOA and Web Service approaches – while some drawbacks – poor support for context-aware and location-based services, verticality and low interoperability of some proprietary products, criticality of content adaptation, etc. – still limit the solutions’ effectiveness.

Paper Nr: 545
Title:

Patterns for Modeling and Composing Workflows from Grid Services

Authors:

Yousra Bendaly Hlaoui and Leila Jemni Ben Ayed

Abstract: We propose a set of composition patterns based on UML activity diagrams that support the different forms of matching and integrating Grid service operations in a workflow. The workflows are built on an abstract level using UML activity diagram language and following an MDA composition approach. In addition, we propose a Domain Specific Language (DSL) which extends the UML activity diagram notation allowing a systematic composition of workflows and containing appropriate data to describe a Grid service. These data are useful for the execution of the resulting workflow.

Paper Nr: 557
Title:

A Case Study of Knowledge Management Usage in Agile Software Projects

Authors:

Anderson Yanzer Cabral, Marcelo Blois Ribeiro, Ana Paula Lemke, Marcos Tadeu Silva, Mauricio Cristal and Cristiano Franco

Abstract: Agile Methodologies promote a group of principles which differ from Traditional Methods. One concrete difference is the manner in which knowledge is managed during a software development process. Most proposals for knowledge management have been generated for Traditional Methods but have failed in Agile Projects because they focus on explicit Knowledge Management. This paper presents a case study with detailed contributions taken from lessons learned on some issues related to Knowledge Management in a distributed project that makes use of Agile Methodologies.

Paper Nr: 578
Title:

A Hierarchical Product-Property Model to Support Product Classification and Manage Structural and Planning Data

Authors:

Diego M. Giménez, Gabriela P. Henning and Horacio P. Leone

Abstract: Mass customization is one of the main challenges that managers face since it results in a proliferation of product data within the various organizational areas of an enterprise and across different enterprises. Effective solutions to this problem have resorted to generic bills of materials and to the grouping of product variants into product families, thus improving data management and sharing. However, issues like product family identification and formation, as well as data aggregation have not been dealt with by this type of approach. This contribution addresses these challenges and proposes a hierarchical data model based on the concepts of variant, variant set and family. It allows managing huge amounts of structural and non-structural information in a systematic way, with minimum replication. Besides, it proposes an unambiguous criterion, based on the properties of variants, for identifying families and variant sets. Finally, the approach can explicitly handle aggregated data which is intrinsic to generic concepts like families and variant sets. A case study is analyzed to illustrate the representation capabilities of this approach.

Paper Nr: 598
Title:

Collaborative, Participative and Interactive Enterprise Modeling

Authors:

Joseph Barjis

Abstract: Enterprise modeling is a daunting task to carry out from a single perspective. Adding to this complexity are the conflicting descriptions given by different actors when business processes are documented. Often enterprise modeling takes rounds of iterations and clarification before the models are verified and validated. In order to expedite the modeling process and the validity of the models, in this paper we propose an approach called collaborative, participative, and interactive modeling (CPI Modeling). The main objective of the CPI approach is to enable the extended participation of actors who have valuable insight into the enterprise operations and business processes. Achieving this goal with any modeling method and language can be quite challenging: for CPI Modeling to succeed, the modeling method should adhere to certain qualities. Next to the CPI Modeling approach, this paper discusses an enterprise modeling method that is simple, yet powerful enough to capture intricate enterprise processes and simulate them.

Short Papers
Paper Nr: 9
Title:

A PETRI NET MODEL OF PROCESS PLATFORM-BASED PRODUCTION CONFIGURATION

Authors:

Linda L. Zhang and Brian Rodrigues

Abstract: In the literature process platform-based production configuration (PPbPC) has been proposed to obtain efficiency in product family production. In this paper, we present a holistic view of PPbPC, attempting to facilitate understanding and implementation. This is accomplished through dynamic modelling and graphical representation based on Petri nets (PNs) techniques. To cope with the modelling difficulties, we develop a new formalism of hierarchical colored timed PNs (HCTPNs) by integrating the basic principles of hierarchical PNs, timed PNs and colored PNs. In the formalism, three types of nets together with a system of HCTPNs are defined to address the fundamental issues in PPbPC. A case study of electronics products is also reported to demonstrate PPbPC using the proposed formalism.

Paper Nr: 77
Title:

A SIMULATION MODEL FOR MANAGING ENGINEERING CHANGES ALONG WITH NEW PRODUCT DEVELOPMENT

Authors:

Weilin Li and Young B. Moon

Abstract: This paper presents a process model for managing Engineering Changes (ECs) while other New Product Development (NPD) activities are being carried out in a company. The discrete-event simulation model incorporates Engineering Change Management (ECM) into an NPD environment by allowing ECs to share resources with regular NPD activities. Six model variables - (i) overlapping, (ii) NPD departmental interaction, (iii) ECM effort, (iv) resource constraints, (v) arrival rate, and (vi) resource using priority - are explored to identify how they affect lead time and productivity of both NPD and ECM. Decision-making suggestions for minimum EC impact are then drawn from an overall enterprise system level perspective based on the simulation results.

Paper Nr: 90
Title:

SECURITY ANALYSIS OF THE GERMAN ELECTRONIC HEALTH CARD’S PERIPHERAL PARTS

Authors:

Ali Sunyaev, Alexander Kaletsch, Christian Mauro and Helmut Krcmar

Abstract: This paper describes a technical security analysis based on experiments done in a laboratory and verified in a physician’s practice. The health care telematics infrastructure in Germany stipulates that every patient automatically be given an electronic health smart card and every health care provider a corresponding health professional card. We analyzed these cards and the peripheral parts of the telematics infrastructure according to the ISO 27001 security standard. The introduced attack scenarios show that there are several security issues in the peripheral parts of the German health care telematics. Based on the discovered vulnerabilities, we provide corresponding security measures to overcome these open issues and derive conceivable consequences for the nation-wide introduction of the electronic health card in Germany.

Paper Nr: 116
Title:

ON ANALYZING WEB SERVICES INTERACTIONS

Authors:

Zakaria Maamar, Kokou Yétongnon, Khouloud Boukadi and Djamal Benslimane

Abstract: This paper looks into the types of preferences that could be associated with the interaction sessions established between component Web services engaged in a composition scenario. Examples of preferences include partnership, which has a composition flavor, and privacy, which has a data flavor. Not all providers are willing to let their Web services interact and share data with peers without knowing, for example, the reputation of these peers, the rationale for submitting data to these peers, and the actions these peers would take over these data. Our contributions revolve around a Specification for Privacy and Partnership Preferences, and include a list of potential partnership- and privacy-oriented preferences that are relevant to Web services engaged in compositions, a list of corrective actions to be taken in case these preferences turn out to be unsatisfied at run-time, and graphical mechanisms that show these preferences when modeling composition scenarios.

Paper Nr: 164
Title:

AN APPROACH TO MODEL-DRIVEN DEVELOPMENT PROCESS SPECIFICATION

Authors:

Rita Suzana Pitangueira Maciel, Bruno Carreiro da Silva, Ana Patrícia Fontes Magalhães and Nelson Souto Rosa

Abstract: The adoption of MDA in software development is increasing, and it is widely recognized as an important approach for building software systems. Meanwhile, the use of MDA requires the definition of a software process that guides developers in the elaboration and generation of models. While the first model-driven software processes have started to appear, an approach for describing them in such a way that they may be better communicated, understood, reused and evolved systematically by the development team is still lacking. In this context, this paper presents an approach for the specification of MDA processes based on specializations of some SPEM 2 concepts. In order to support and evaluate our approach, a tool was developed and applied to a particular MDA process for the development of specific middleware services.

Paper Nr: 165
Title:

ONTOLOGY MAPPING BASED ON ASSOCIATION RULE MINING

Authors:

C. Tatsiopoulos and B. Boutsinas

Abstract: Ontology mapping is one of the most important processes in ontology engineering. It is imposed by the decentralized nature of both the WWW and the Semantic Web, where heterogeneous and incompatible ontologies can be developed by different communities. Ontology mapping can be used to establish efficient information sharing by determining correspondences among such ontologies. The ontology mapping techniques presented in the literature are based on syntactic and/or semantic heuristics. In almost all of them, user intervention is required. In this paper, we present a new ontology mapping technique which, given two input ontologies, is able to map concepts in one ontology onto those in the other, without any user intervention. It is based on association rule mining applied to the concept hierarchies of the input ontologies. We also present experimental results that demonstrate the accuracy of the proposed technique.

Paper Nr: 187
Title:

EVALUATION OF CASE TOOL METHODS AND PROCESSES - An Analysis of Eight Open-source CASE Tools

Authors:

Stefan Biffl, Christoph Ferstl, Christian Höllwieser and Thomas Moser

Abstract: There are many approaches to Computer-Aided Software Engineering (CASE), often accomplished by expensive tools from market-leading companies. However, to minimize cost, system architects and software designers look for less expensive, if not open-source, CASE tools. As there is often no common understanding of functionality and application area, a general inspection of the open-source CASE tool market is needed. The idea of this paper is to define a “status quo” of the functionality and the procedure models of open-source CASE tools by evaluating these tools using a criteria catalogue for the areas: technology, modelling, code generation, procedure model, and administration. Based on this criteria catalogue, 8 open-source CASE tools were evaluated with 5 predefined scenarios. The major result is that there was no comprehensive open-source CASE tool which assists and fits well a broad set of developer tasks, especially since a small set of the evaluated tools lack a solid implementation of several of the evaluated criteria. Some of the evaluated tools show either just basic support for all evaluation criteria or high capabilities in a specific area, particularly in code generation.

Paper Nr: 190
Title:

SECURITY AND DEPENDABILITY IN AMBIENT INTELLIGENCE SCENARIOS - The Communication Prototype

Authors:

Alvaro Armenteros, Antonio Muñoz, Antonio Maña and Daniel Serrano

Abstract: Ambient Intelligence (AmI) refers to an environment that is sensitive, responsive, interconnected, contextualized, transparent, intelligent, and acting on behalf of humans. Security, privacy, and trust challenges are amplified in the AmI computing model and need to be handled. This paper describes the potential of SERENITY in Ambient Intelligence (AmI) Ecosystems. The main objective of SERENITY consists in providing a framework for the automated treatment of security and dependability issues in AmI scenarios. In addition, a proof of concept is provided: we describe the implementation of a prototype based on the application of the SERENITY model (including processes, artefacts and tools) to an industrial AmI scenario. A complete description of this prototype, along with all S&D artefacts used, is provided in the following sections.

Paper Nr: 195
Title:

A METHOD FOR REWRITING LEGACY SYSTEMS USING BUSINESS PROCESS MANAGEMENT TECHNOLOGY

Authors:

Gleison Samuel do Nascimento, Cirano Iochpe, Lucinia Heloisa Thom and Manfred Reichert

Abstract: Legacy systems are systems which execute useful tasks for the organization. Unfortunately, keeping a legacy system running is a complex and costly task. Thus, in recent years several approaches have been suggested for rewriting legacy systems using contemporary technologies. In this paper we present a method for rewriting legacy systems based on Business Process Management (BPM). The use of BPM for migrating legacy systems facilitates the monitoring and continuous improvement of the information systems existing in the organization.

Paper Nr: 202
Title:

A COMPREHENSIVE APPROACH FOR SOLVING POLICY HETEROGENEITY

Authors:

Rodolfo Ferrini and Elisa Bertino

Abstract: With the increasing popularity of collaborative applications, policy-based access control models have become the usual approach for access control enforcement. In recent years several tools have been proposed to support the maintenance of such policy-based systems. However, none of those tools is able to deal with heterogeneous policies, that is, policies that belong to different domains and thus adopt different terminologies. In this paper, we propose a stack of functions that allows us to create a unified vocabulary for a multidomain policy set. This unified vocabulary can then be exploited by analysis tools, improving the accuracy of the results and thus applicability in real-case scenarios. In our model, we represent the vocabulary of a policy using ontologies. With an ontology it is possible to describe a certain domain of interest, providing richer information than a plain list of terms. On top of this additional semantic data it is possible to define complex functions such as ontology matching, merging and extraction, which can be combined in the creation of the unified terminology for the policies under consideration. Along with the definition of the proposed model, detailed algorithms are also provided. We also present experimental results which demonstrate the efficiency and practical value of our approach.

Paper Nr: 204
Title:

CORRELATION OF CONTEXT INFORMATION FOR MOBILE SERVICES

Authors:

Stephan Haslinger, Miguel Jiménez and Schahram Dustdar

Abstract: Location Based Services are a key driver in today’s telecom market, even if the power of Location Based Services is far from exhausted in current telecom systems. To build intuitive Location Based Services for mobile handsets, one success factor is to cover a broad range of mobile handsets available on the market and to make the services context aware. Within the EUREKA project MyMobileWeb we implement a framework to obtain contextual information from handsets using their various capabilities. Contextual information is any information we can obtain from the handset that can be used for any kind of service; the most obvious is location information. Within our framework we built an architecture that can obtain location information from various sources and is not bound to any special handset capability. Furthermore, the architecture can be used to obtain various other context information, such as battery level. This information is then used to offer special services to the customer. For this, the context information has to be correlated, which is done by a correlation engine for contextual information. This paper presents a framework that can handle and correlate contextual information in a very flexible way.

Paper Nr: 223
Title:

BUSINESS PROCESS RE-ENGINEERING IN SUPPLY CHAINS EXAMINING THE CASE OF THE EXPANDING HALAL INDUSTRY

Authors:

Mohammed Belkhatir, Shalini Bala and Noureddine Belkhatir

Abstract: Due to several issues arising in the rapidly expanding Halal industry, among them the production of non-genuine or contaminated products and meats, there is a need to develop effective solutions for ensuring authenticity and quality. This paper proposes the specification of a formalized supply chain framework for the production and monitoring of food and products. The latter enforces high-quality automated monitoring as well as shorter production cycles through enhanced coordination between the actors and organizations involved. Our proposal is guided by business process support to ensure quality and efficiency of product development and delivery. It moreover meets the requirements of industrial standards by adopting the Capability Maturity Model Integration’s highest process maturity level: establishing quantitative process-improvement objectives, proposing the integrated support of engineering processes, and enforcing synchronization and coordination, strict monitoring and exception handling. We then delve into some of the important technologies from the implementation point of view and align them with the formalized Halal framework. An Information Technology support instantiation is proposed, leading to a use case scenario with technology identification.

Paper Nr: 237
Title:

DISCOVERY AND ANALYSIS OF ACTIVITY PATTERN CO-OCCURRENCES IN BUSINESS PROCESS MODELS

Authors:

Jean Michel Lau, Cirano Iochpe, Lucinéia Heloisa Thom and Manfred Reichert

Abstract: Research on workflow activity patterns recently emerged in order to increase the reuse of recurring business functions (e.g., notification, approval, and decision). One important aspect is to identify pattern co-occurrences and to utilize this information for creating modeling recommendations regarding the activity patterns best suited to be combined with an already used one. Activity patterns as well as their co-occurrences can be identified through the analysis of process models rather than event logs. Related to this problem, this paper proposes a method for discovering and analyzing activity pattern co-occurrences in business process models. Our results are used for developing a BPM tool which fosters the modeling of business processes based on the reuse of activity patterns. Our tool includes an inference engine which considers the pattern co-occurrences to give design-time recommendations for pattern usage.

Paper Nr: 253
Title:

MODELLING, ANALYSIS AND IMPROVEMENT OF MOBILE BUSINESS PROCESSES WITH THE MPL METHOD

Authors:

Volker Gruhn and André Köhler

Abstract: This paper introduces the Mobile Process Landscaping (MPL) method for modelling, analysing and improving mobile business processes. Current approaches for process modelling and analysis do not explicitly allow the consideration of typical mobility issues, e.g. location-dependent activities, mobile networks as resources, and the specifics of mobile information systems. Thus, our method focuses on the modelling and analysis of these characteristics, and is furthermore based on the process landscaping approach, supporting the easy creation of hierarchical models of distributed processes. The method comes with a specialized modelling notation and guidelines for the creation of process landscapes, context models, and business object models. Furthermore, it provides a catalogue of formally defined evaluation objectives targeting typical mobility issues. Each evaluation objective can be tested automatically on the created process landscape. The method also includes a best-practices catalogue with patterns for process and application improvements in typical mobility situations. A validation of the method is presented, showing results from the method's use in a real-world project.

Paper Nr: 262
Title:

RFID IN THE SUPPLY CHAIN: HOW TO OBTAIN A POSITIVE ROI - The Case of Gerry Weber

Authors:

Christoph Goebel, Christoph Tribowski, Oliver Günther, Ralph Tröger and Roland Nickerl

Abstract: Although the use of Radio Frequency Identification (RFID) in supply chains still lags behind expectations, its appeal to practitioners and researchers alike is undiminished. Apart from technical challenges such as low read rates and efficient backend integration, a major reason for its slow adoption is the high transponder price. We deliver a case study that investigates the financial, technical and organizational challenges faced by an apparel company that is currently introducing item-level RFID to monitor its supply chain. The company has developed an implementation strategy based on cross-company, closed-loop, multi-functional use of RFID transponders. This strategy leads to a positive ROI in its case and could serve as an example for other companies considering the introduction of item-level RFID.

Paper Nr: 274
Title:

UNCERTAINTIES MANAGEMENT FRAMEWORK - Foundational Principles

Authors:

Deniss Kumlander

Abstract: Uncertainties management is a crucial part of modern software engineering practice, yet it is mostly ignored by management and by modern software development practices, or is dealt with only reactively. As a result, unhandled uncertainties introduce many threats and cause late delivery of projects or budget overruns, which in most cases means the failure of the software engineering process. In this paper, the foundational principles of an uncertainties management framework are defined.

Paper Nr: 285
Title:

A RULE-BASED APPROACH AND FRAMEWORK FOR MANAGING BEST PRACTICES - An XML-based Management using Pure Database System Utilities

Authors:

Essam Mansour and Hagen Höpfner

Abstract: Best practice refers to the best way to perform specified activities. In this paper we present our SIM approach, which incorporates best practices as skeletal plans from which several entity-specific (ES) plans are generated. The skeletal and ES plans represent the complex information incorporating the best practices into organization activities. The paper also presents the SIM framework for managing complex information through three phases: specifying the skeletal plans, instantiating ES plans, and maintaining these ES plans during their lifespan. The paper outlines an implementation, a case study, and the evaluation of the SIM approach and framework.

Paper Nr: 289
Title:

ENTERPRISE SYSTEM DEVELOPMENT WITH INVARIANT PRESERVING - A Mathematical Approach by the Homotopy Lifting and Extension Properties

Authors:

Kenji Ohmori and Tosiyasu L. Kunii

Abstract: In this paper, a theoretical method for developing enterprise systems represented by the π-calculus is introduced. The method is based on the modern mathematics of homotopy theory. The homotopy lifting and extension properties are applied to developing systems in bottom-up and top-down ways with the incrementally modular abstraction hierarchy, where system development is carried out by climbing down the abstraction hierarchy while adding invariants linearly. This avoids the combinatorial explosions that cause an enormous waste of time and cost in testing. The system requirements and a state transition diagram drive the actions of an event by applying the HEP. Then, the state transition diagram and actions yield π-calculus processes by applying the HLP. These processes do not need testing because invariants are preserved.

Paper Nr: 296
Title:

AUTOMATIC GENERATION OF TEST CASES IN SOFTWARE PRODUCT LINES

Authors:

Pedro Reales Mateo, Macario Polo and Beatriz Pérez Lamancha

Abstract: This paper describes a method to automatically generate test cases with oracles in software product lines, where the management of variability and traceability are two indispensable requirements. These characteristics may be quite useful for the processing and automatic addition of the oracle to test cases, which is one of the main problems found not only in the context of software product lines, but also in the general testing literature. The paper describes a simple but effective way to deal with this problem, based on annotations to precode artifacts, metamodelling and transformation algorithms.

Paper Nr: 308
Title:

REASONING ABOUT CUSTOMER NEEDS IN MULTI-SUPPLIER ICT SERVICE BUNDLES USING DECISION MODELS

Authors:

Sybren de Kinderen, Jaap Gordijn and Hans Akkermans

Abstract: We propose a method, e3service, to reason about satisfying customer needs in the context of a wide choice of multi-supplier ICT service bundles. Our method represents customer needs, their ensuing consequences, and the services that realize those consequences in a service catalogue. This catalogue is then used by a reasoner, which elicits customer needs, computes their consequences, and automatically matches these consequences with services offered by suppliers. The e3service method has been implemented and tested in software to demonstrate its feasibility.

Paper Nr: 325
Title:

AN EVENT STRUCTURE BASED COORDINATION MODEL FOR COLLABORATIVE SESSIONS

Authors:

Kamel Barkaoui, Chafia Bouanaka and José Martín Molina Espinosa

Abstract: Distributed collaborative applications are characterized by supporting groups' collaborative activities. These applications involve physically distributed user groups who cooperate through interactions and are gathered in work sessions. The effective result of collaboration in a session is the production of simultaneous and concurrent actions. Interactions are fundamental actions of a collaborative session and need to be coordinated (synchronized) to avoid inconsistencies. In the present work we propose an event-structure-based model for coordination in a collaborative session, making interactions between participants and applications possible in a consistent way. The proposed model describes interdependencies, in the form of coordination rules, between the different actions of the collaborative session actors.

Paper Nr: 331
Title:

MINING AND MODELING DECISION WORKFLOWS FROM DSS USER ACTIVITY LOGS

Authors:

Petrusel Razvan

Abstract: This paper introduces the concept of decision workflows, regarded as the sequence of actions of the decision maker in the decision making process. We show how, based on a decision support system we previously created, we log the behaviour of the decision maker. The log is then imported into the ProM framework and mined using existing process mining algorithms. The mined model shows us the control-flow perspective (the order of the decision maker's actions), the organisational perspective (the actual relationships among decision makers in group decisions), and the case perspective (what kind of support is required by each type of decision). The aim of our research is to automate the creation of decision making patterns. Once obtained, the workflows can be merged into a financial enterprise model, which, properly validated, can become a financial reference model.

Paper Nr: 334
Title:

INFORMATION ARCHITECTURE OF FRACTAL INFORMATION SYSTEMS

Authors:

Jānis Grabis, Mārīte Kirikova and Jānis Vanags

Abstract: The fractal approach has emerged as a promising method for the development of loosely coupled, distributed enterprise information systems. This paper investigates the application of information architecture in the development of fractal information systems. Principles for designing the information architecture of fractal information systems, as well as rules for analyzing the information architecture, are developed. These rules are used to obtain problem-domain representations specifically suited to the needs of individual fractal entities. The usage of the information architecture in implementing a fractal information system for a university's study programme development problem is demonstrated.

Paper Nr: 351
Title:

A PROCESS FOR MULTI-AGENT DOMAIN AND APPLICATION ENGINEERING - The Domain Analysis and Application Requirements Engineering Phases

Authors:

Adriana Leite and Rosario Girardi

Abstract: Domain Engineering is a process for developing a reusable application family in a particular problem domain; Application Engineering is the process for constructing a specific application by reusing the software artifacts of the application family previously produced in the Domain Engineering process. MADAE-Pro is an ontology-driven process for multi-agent domain and application engineering which promotes the construction and reuse of families of agent-oriented applications. This article gives an overview of MADAE-Pro, emphasizing the description of its domain analysis and application requirements engineering phases and showing how software artifacts produced by the former are reused in the latter.

Paper Nr: 358
Title:

A REVISED MODELLING QUALITY FRAMEWORK

Authors:

Pieter Joubert, Stefanie Louw, Carina De Villiers and Jan Kroeze

Abstract: Systems modelling quality plays a critical role in the quality of the final system. Better quality systems are one aspect of addressing system failures, which are still common today. This research paper studies quality frameworks for systems modelling techniques and presents a revised framework. Several authors built their frameworks on the Lindland et al. (1994) conceptual model quality framework. Those frameworks are more abstract and static – they do not clearly illustrate the flow of information through the systems modelling process. The proposed framework makes it much easier to identify which quality aspects have to be in place at which points within the modelling process for it to succeed in its purpose. In addition, it creates awareness of issues such as the kind of skills and background knowledge that the people involved in this process need to have.

Paper Nr: 398
Title:

LAYERED PROCESS MODELS - Analysis and Implementation (using MDA Principles)

Authors:

Samia Oussena and Balbir S. Barn

Abstract: One of the key challenges of Service-oriented architecture (SOA) is to build applications, services and processes that truly meet business requirements. Model-Driven Architecture (MDA) promotes the creation of models and code through model transformation. We argue in this paper that the same principle can be used to drive the development of SOA applications, using a Business Process Modelling (BPM) approach, supported by Business Process Modelling Notation (BPMN). We present an approach that allows the SOA application to be aligned with the business requirements, by offering guidelines for a systematic transformation of a business process model from requirements analysis into a working implementation.

Paper Nr: 402
Title:

EFFICIENT DATA STRUCTURES FOR LOCAL INCONSISTENCY DETECTION IN FIREWALL ACL UPDATES

Authors:

S. Pozo, R. M. Gasca and F. de la Rosa T.

Abstract: Filtering is a very important issue in next generation networks. These networks consist of a relatively high number of resource-constrained devices and have special features, such as the management of frequent topology changes. At each topology change, the access control policy of all nodes of the network must be automatically modified. In order to manage these access control requirements, firewalls have been proposed by several researchers. However, many of the problems of traditional firewalls are aggravated by these networks' particularities, as is the case with ACL consistency. Inconsistencies in a firewall ACL generally imply design errors and indicate that the firewall is accepting traffic that should be denied, or vice versa. This can result in severe problems such as unwanted access to services, denial of service, overflows, etc. Detecting inconsistencies is of extreme importance in the context of highly sensitive applications (e.g. health care). We propose a local inconsistency detection algorithm and data structures to prevent automatic rule updates that can cause inconsistencies. The proposal has very low computational complexity, as both theoretical and experimental results show, and thus can be used in real-time environments.

Paper Nr: 409
Title:

DERIVING CANONICAL BUSINESS OBJECT OPERATION NETS FROM PROCESS MODELS

Authors:

Wang Zhaoxia, Wang Jianmin, Wen Lijie and Liu Yingbo

Abstract: A process model defines the order of business object operations. It is necessary to validate that the business object operation sequences are consistent with the reference business object life cycle. In this paper we propose an approach for deriving canonical business object operation nets from process models that are modelled with a workflow coloured Petri net, i.e. a WFCP-net. Our approach consists of three steps. First, the tasks that access a certain business object are modelled with task operation nets. Second, the WFCP-net is rewritten with these task operation nets. Third, the business object operation net is reduced to its canonical form.

Paper Nr: 451
Title:

A VISION FOR AGILE MODEL-DRIVEN ENTERPRISE INFORMATION SYSTEMS

Authors:

N. R. T. P. van Beest, N. B. Szirbik and J. C. Wortmann

Abstract: Current model-driven techniques claim to be able to generate Enterprise Information Systems (EISs) based on enterprise models. However, these techniques still lack – after the initial deployment – the long-desired flexibility that allows a change in the model to be immediately and easily reflected in the EIS. Interdependencies between models are insufficiently managed, requiring a large amount of human intervention to achieve and maintain consistency between the models and the EIS. In this position paper a vision is presented which describes how model-driven change of EISs should be structured in a coherent framework that allows for monitoring of interdependencies during model-driven change. By proposing fully automated consistency and pattern checks, the presented agile model-driven framework will reduce the amount of human intervention required during change. As a result, the cost and time span of model-driven EIS change can be reduced, thereby improving organizational agility.

Paper Nr: 454
Title:

SPECIFYING AND COMPILING HIGH LEVEL FINANCIAL FRAUD POLICIES INTO STREAMSQL

Authors:

Michael Edward Edge, Pedro R. Falcone Sampaio, Oliver Philpott and Mohammad Choudhary

Abstract: Fraud detection within financial platforms remains a challenging area in which criminals continue to thrive, breaching security mechanisms with increasingly innovative and sophisticated system attacks. Following the migration from reactive to proactive screening of transactional data to reduce an organisation's fraud detection latency, fraud analysts now find themselves responsible for the maintenance of extensive fraud policy sets and their implementation as complex data stream processing procedures. This paper presents a Financial Fraud Modelling Language and policy mapping tool for the high-level expression and implementation of proactive fraud policies using stream processors. A key aspect of the approach is the reduction of the complexity and implementation latency associated with proactive fraud policy management through abstraction of policy functionality using a conceptual-level modelling language and an innovative policy mapping tool. This paper focuses upon the rule-based language model for high-level expression of financial fraud policies and the associated compiler tool for specifying and mapping policies into StreamSQL.

Paper Nr: 480
Title:

ROBUST APPROACH TOWARDS CONTEXT DEPENDANT INFORMATION SHARING IN DISTRIBUTED ENVIRONMENTS

Authors:

Jenny Lundberg and Rune Gustavsson

Abstract: In this paper we propose a robust approach to context-dependent information modelling supporting trustworthy information exchange. Shortcomings and challenges of present syntax-based approaches to information modelling in dynamic contexts are identified. Basic principles are introduced and used to provide a robust approach towards meeting some of those challenges. The main aim of the approach is to reduce the brittleness of context-dependent information and to enable intelligible information handling in distributed environments. The application domain is Emergency Service Centres, where the focus is on the distributed handling of emergency calls in life-critical situations. The main contribution of the paper is a principled approach to the use of abbreviations in dynamic emergency situations. Points of interaction for coordination are introduced as a tool supporting mappings of abbreviations between different contexts.

Paper Nr: 489
Title:

USING BPMN AND TRACING FOR RAPID BUSINESS PROCESS PROTOTYPING ENVIRONMENTS

Authors:

Alessandro Ciaramella, Mario G. C. A. Cimino, Beatrice Lazzerini and Francesco Marcelloni

Abstract: Business Process (BP) analysis aims to investigate properties of BPs by performing simulation, diagnosis and verification with the goal of supporting BP Management (BPM). In this paper, we propose a framework for BPM that relies on the BP Modeling Notation (BPMN). More specifically, we first introduce a method to deal with the BPM life cycle. Then, we discuss a platform to support this life cycle. The platform comprises three basic modules: a visual BPMN-based designer, a process tracing service, and a BP Manager for, respectively, the design, configuration and execution phases of the BPM life cycle. The proposed framework is particularly useful for performing business simulations such as what-if analysis, and for providing efficient integration support within the supply chain. In this study, we also show a practical application of this framework through a real-world experience at a leather firm, offering an environment for process communication as well as for time and cost analysis.

Paper Nr: 501
Title:

INFORMATION SYSTEMS SECURITY BASED ON BUSINESS PROCESS MODELING

Authors:

Joseph Barjis

Abstract: In this paper, we propose a conceptual model and develop a method for secure business process modeling towards information systems (IS) security. The emphasis of the proposed method is on the social characteristics of systems, which is furnished through the association of each social actor with their authorities, responsibilities and obligations. In turn, such an approach leads to secure information systems. The resulting modeling approach is a multi-method for developing secure business process models (secure BPM), where the DEMO transaction concept is used for business process modeling, and the Norm Analysis Method (organizational semiotics) for incorporating security safeguards into the model.

Paper Nr: 513
Title:

A SERVICE ORIENTED ENGINEERING APPROACH TO ENHANCE THE DEVELOPMENT OF AUTOMATION AND CONTROL SYSTEMS

Authors:

David Hästbacka and Seppo Kuikka

Abstract: In order for the manufacturing industry and closely related engineering disciplines to be competitive and productive, business structures and practices have to adapt to global changes and tougher competition at all levels of operation. An engineering approach based on engineering services provides a foundation for commercial off-the-shelf services to be combined and utilized between engineering enterprises in the development of automation and control systems. A service-based operations model would enable the utilization of expert services as part of the development process to improve system quality, increase productivity and provide better work process management, as well as allow easier integration with later life-cycle operations. This paper presents the opportunities this kind of conceptual approach offers and outlines some of the related research challenges that need further investigation.

Paper Nr: 515
Title:

APPLYING AN EVENT-BASED APPROACH FOR DETECTING REQUIREMENTS INTERACTION

Authors:

Edgar Sarmiento, Marcos R. S. Borges and Maria Luiza M. Campos

Abstract: In the software development cycle, it is in the requirements analysis phase that most of the problems that can compromise the delivery time and the development and maintenance costs must be identified and resolved. In general, the requirements obtained in this phase have different relationships with each other. Some of these relationships, commonly called negative interactions, hinder or prevent the progress of some activities of the development process. The detection of interactions between requirements is an important activity that may prevent some of these problems and avoid their propagation throughout the remaining activities. Most of the existing research in this area focuses only on the requirements phase, mainly on the identification of conflict and/or inconsistency interactions. This paper presents a semi-formal event-based approach to model and identify the interactions between requirements, investigating the interactions that influence the other phases of the software development process.

Paper Nr: 518
Title:

AN EVOLUTIONARY APPROACH FOR QUALITY MODELS INTEGRATION

Authors:

Rodrigo Santos de Espindola and Jorge Luis Nicolas Audy

Abstract: The existing quality models (such as ISO/IEC 15504, CMMI, MPS.BR, ITIL and COBIT) establish different processes and controls that must be adopted to achieve high software process reliability. While it is possible to notice similarities and overlapping areas among them, a systematic approach to integrating quality models is not widely explored in the literature. In this work we propose an evolutionary approach to integrating quality models. The approach defines a method that can be executed in a systematic way and has a meta-model and a mapping table as its outcome. The method is composed of two stages: meta-model development and meta-model stabilization. As this is ongoing research, this work presents the application and the results from the execution of the first stage. As a result, a meta-model representing the structure of four different quality models was developed and its applicability was verified.

Paper Nr: 527
Title:

A SOCIO-SEMANTIC APPROACH TO THE CONCEPTUALISATION OF DOMAINS, PROCESSES AND TASKS IN LARGE PROJECTS

Authors:

Carla Pereira, Cristóvão Sousa and António Lucas Soares

Abstract: A case study involving a new method to support the collaborative construction of semantic artefacts in an inter-organizational context is described. The method is intended to be applied, in particular, in the early phases of ontology development. We share the view that the development of semantic artefacts in collaborative networks of organizations should be based on a continuous construction of meaning, rather than pursuing the delivery of highly formalized accounts of domains. To that end, our research is directed at the application of cognitive semantics results, specifically by developing and extending Conceptual Blending Theory to cope with the socio-cognitive aspects of inter-organizational ontology development. An evaluation experiment for this method was carried out in the scope of a large European project in the area of industrial engineering. The method evaluation and its results are described.

Paper Nr: 532
Title:

ENTERPRISE ONTOLOGY MANAGEMENT - An Approach based on Information Architecture

Authors:

Leonardo Azevedo, Sean Siqueira, Fernanda Araujo Baião, Jairo Souza, Mauro Lopes, Claudia Cappelli and Flavia Maria Santoro

Abstract: Ontologies have gained popularity, but their promise of being a key to solving real-world problems and mitigating interoperability problems at a large scale has not yet been fulfilled. Ontology management is at the kernel of this evolution, and there is a lack of adequate strategies and mechanisms for handling it in a way that contributes to a better alignment between business and IT. This work proposes an approach for enterprise ontology management as part of an Information Architecture initiative. This approach provides a more complete foundation for the ontology lifecycle while guiding the enterprise in this management, by defining a set of processes, roles and competencies required for ontology management. It was applied at the Data Integration department of a large enterprise in Brazil.

Paper Nr: 554
Title:

FINDING REUSABLE BUSINESS PROCESS MODELS BASED ON STRUCTURAL MATCHING

Authors:

Han G. Woo

Abstract: Successfully integrating business processes with information systems has been a critical issue in many organizations. Such integration should take place throughout the various stages of systems development to manage correct, traceable business process requirements. To support business process management (BPM) activities, many modeling formalisms and tools have been proposed. Yet the reuse of business process knowledge has been understudied, although reuse practice is common, often relying on human recollection and reference models. This research proposes a tool support that assists in the reuse of business process models such as BPMN, EPC, and UML Activity Diagrams. In the suggested approach, the semantics of these formalisms are preserved in the conceptual graph format along with their instantiations and interrelationships. A structural data mining tool is then used to find reusable process models based on similarities in sequences of events and processes. This study can be applied to many reuse-related situations, namely retrieval of reusable process models given a problem, discovery of sequence patterns among process models, and suggesting instances of (anti-)patterns for learning purposes.

Paper Nr: 573
Title:

METHOD MANUAL BASED PROCESS GENERATION AND VALIDATION

Authors:

Peter Killisperger, Georg Peters, Markus Stumptner and Thomas Stückl

Abstract: In order to be usable across a spectrum of projects, software processes are described in a generic way. Due to the uniqueness of software development, processes have to be adapted to project-specific needs to be effectively applicable in projects. Siemens AG has started research projects aiming to improve this instantiation of processes. A system supporting project managers in the instantiation of software processes at Siemens AG is being developed. It aims not only to execute instantiation decisions made by humans but also to automatically restore the correctness of the resulting process.

Paper Nr: 580
Title:

REVERSE ENGINEERING A DOMAIN ONTOLOGY TO UNCOVER FUNDAMENTAL ONTOLOGICAL DISTINCTIONS - An Industrial Case Study in the Domain of Oil and Gas Production and Exploration

Authors:

Mauro Lopes, Giancarlo Guizzardi, Fernanda Araujo Baião and Ricardo Falbo

Abstract: Ontologies are commonly used in computer science either as a reference model to support semantic interoperability in several scenarios, or as a computer-tractable artifact that should be efficiently represented to be processed. This duality poses a tradeoff between expressivity and computational tractability that should be taken care of in different phases of an ontology engineering process. In this scenario, the choice of the ontology representation language is crucial, since different languages have different expressivity and ontological commitments, reflected in the specific set of available constructs. The inadequate use of a representation language, disregarding the goal of each ontology engineering phase, can lead to serious problems in database design and integration, in domain and systems requirements analysis within software development processes, in knowledge representation and automated reasoning, and so on. This article presents an illustration of these issues using a fragment of an industrial case study in the domain of Oil and Gas Exploration and Production. We make explicit the differences between two different representations of this domain, and highlight a number of concepts and ideas (tacit domain knowledge) that were implicit in the original model, represented using a lightweight ontology language, and that became explicit by applying the methodological directives underlying an ontologically well-founded modeling language.

Paper Nr: 587
Title:

A BPMN BASED SECURE WORKFLOW MODEL

Authors:

Li Peng

Abstract: Secure workflow has become an important topic in both academia and industry. A secure workflow model can be used to analyze workflow systems according to specific security policies. Such a model is needed to allow controlled access to data objects, secure execution of tasks, and efficient management and administration of security. In this paper, I propose a BPMN-based secure workflow model to manage specific processes such as authorizations for executing tasks and accessing documents. The secure workflow model is constructed using BPMN elements. The model is hierarchical and describes a secure workflow system at the workflow layer, task layer and data layer. This model ensures the security properties of workflows: integrity, authorization and availability. Moreover, the model is easily readable and understandable.

Paper Nr: 589
Title:

SEMIOTICS - An Asset for Understanding Information Systems Communication

Authors:

Pedro Azevedo Rocha and Ângela Lacerda Nobre

Abstract: Problem solving relies on the use of knowledge and/or imagination, and in a dialogue, or even a monologue, established communication often contains misunderstandings, prideful assumptions and crosstalk. The processing and communication of information in an organisation are produced by creating, passing and utilising signs, whatever they may be, with or without a perception of their Semiotics. If we conceive of it in such a way, and because we are three-dimensional beings, the act of solving is endemic and unconscious to us. We do it using cognitive, mental and visual means residing in a hyper-environment based on signs, which existed even before the creation of the doctrine itself. Therefore, Semiotics exists in and within us. With that definition in mind, why do we not use it and establish it on a daily basis in the classroom, at the workplace, and in social affairs?

Paper Nr: 601
Title:

ENHANCING HIGH PRECISION BY COMBINING OKAPI BM25 WITH STRUCTURAL SIMILARITY IN AN INFORMATION RETRIEVAL SYSTEM

Authors:

Yaël Champclaux, Taoufiq Dkaki and Josiane Mothe

Abstract: In this paper, we present a new similarity measure in the context of Information Retrieval (IR). The main objective of IR systems is to select, from a collection of documents, the relevant documents related to a user's information need. Traditional approaches to document/query comparison use surface similarity, i.e. the comparison engine uses surface attributes (indexing terms). We propose a new method that combines surface and structural similarities with the aim of enhancing the precision of the top retrieved documents. In previous work, we showed that using structural similarity in combination with the cosine measure improves bare cosine ranking. The model we present here is graph-based and belongs to the vector space family: a vector space model considers each document as a vector in the term space, where each coordinate is a value representing the importance of an indexing term in a document or in a query, and the vector space is defined by the set of terms collected during the indexing phase. Similarity measures such as Cosine, Jaccard or Dice are used to determine how well a document corresponds to a query; such measures compute local similarities between a document and a query on the basis of the terms they have in common. Our goal is to exploit another type of similarity, called structural similarity, which identifies resemblances between elements on the basis of the relationships they have. The structural relationship we use originates from the fact that documents contain words and that words are contained in documents.
The idea is to compare documents through the similarities between the words they contain, while the similarities between words are themselves dependent on the similarities between the documents that contain them. In a previous paper, we showed that structural similarities alone were not sufficient to improve the performance of an IR system. In this paper, we therefore present a method that combines structural and surface similarities: surface similarity is computed with an Okapi measure, and the selected documents are then stored in a graph and re-ranked using a SimRank-based score. We call this two-stage method OkaSim. We performed experiments with different term weightings on the Cranfield corpus and show that structural similarities can improve an Okapi BM25 ranking: average precision improves by more than 50% and precision at the top 10 retrieved documents by about 50%. The experiments also address the influence of term weighting on system performance.
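The two-stage OkaSim idea described above can be sketched in code. This is a minimal illustration under our own assumptions, not the authors' implementation: the function names (`bm25_scores`, `simrank`), the BM25 parameters and the naive SimRank iteration are all ours.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.2, b=0.75):
    """Stage 1 (surface similarity): classic Okapi BM25 over tokenised docs."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter(t for d in docs for t in set(d))  # document frequency per term
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            if tf[t] == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

def simrank(graph, C=0.8, iters=5):
    """Stage 2 (structural similarity): naive SimRank on an undirected
    document-term graph; two nodes are similar if their neighbours are."""
    nodes = list(graph)
    sim = {u: {v: float(u == v) for v in nodes} for u in nodes}
    for _ in range(iters):
        new = {}
        for u in nodes:
            new[u] = {}
            for v in nodes:
                if u == v:
                    new[u][v] = 1.0
                elif graph[u] and graph[v]:
                    total = sum(sim[a][b] for a in graph[u] for b in graph[v])
                    new[u][v] = C * total / (len(graph[u]) * len(graph[v]))
                else:
                    new[u][v] = 0.0
        sim = new
    return sim
```

In a two-stage run, `bm25_scores` would select the top candidates, a bipartite graph over those candidates, their terms and the query would be built, and the candidates would be re-ranked by their SimRank score against the query node.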

Paper Nr: 603
Title:

TOWARDS INTEGRATING PERSPECTIVES AND ABSTRACTION LEVELS IN BUSINESS PROCESS MODELING

Authors:

Ivan Markovic and Florian Hasibether

Abstract: In process-driven organizations, process models are the basis on which their supporting process-aware information systems are built. In this paper, we propose an approach for integrating perspectives and abstraction levels in business process modeling. First, we propose six process perspectives to adequately organize information about a business process. Second, we present the abstraction levels in process modeling and discuss metamodel projections on each of the levels. Third, we provide a comparison of our approach to other efforts in the field. With our approach, we take a step towards reducing the complexity of process modeling.

Paper Nr: 621
Title:

KEEPING THE RATIONALE OF IS REQUIREMENTS USING ORGANIZATIONAL BUSINESS MODELS

Authors:

Juliana Jansen Ferreira, Victor Manaia Chaves, Renata Mendes de Araujo and Fernanda Araujo Baião

Abstract: This paper proposes an approach for identifying, documenting and keeping the rationale of information systems requirements starting from the organizational business model. The approach comprises a method and the implementation of a supporting tool. The paper also discusses the results of preliminary case studies with this approach.

Paper Nr: 624
Title:

AN EFFECTIVE PROCESS MODELLING TECHNIQUE

Authors:

Nadja Damij

Abstract: This paper discusses the problem of process modelling and introduces a simple technique, called the activity table, to find a better solution to this problem. The activity table is a technique used in the field of process modelling and improvement. Business process modelling starts by identifying the business processes and continues by defining the work processes and activities of each chosen business process. The technique is independent of the analyst and his/her experience. It requires that each identified activity be connected to its resource and its successor activity, and in this manner it contributes a great deal to developing a process model that is a true reflection of the actual business process. The problem of conducting a surgery is used as an example to test the technique.

Posters
Paper Nr: 16
Title:

A KIND OF INFORMATION CONTENT APPLIED FOR THE HANDICAPPED AND DEMENTIA SITUATION CONSIDERING PHILOSOPHICAL ELEMENTS

Authors:

Masahiro Aruga, Shinwu Liu and Shuichi Kato

Abstract: A communication system has been developed for deaf-blind persons and others, through which they have been able to communicate easily. Its information contents need to be discussed when the situation of dementia is to be analyzed, and those contents need to be estimated so as to correspond to the changing signals of the system when its users develop dementia. In this paper, firstly, the outline of an example of such a communication system is described; secondly, the philosophical elements and their structures are discussed, and the information contents are considered from the philosophical side. Based on Peirce's semiotics, the interpretants defined by Peirce are introduced into the analytical method as the philosophical elements with which to analyze the information structure and communication system of handicapped and dementia situations. New information contents, different from Shannon's ordinary information content, are discussed and introduced into the method of analyzing the information processes of handicapped and dementia situations. Thirdly, in considering the new information contents, the concept of a compartment system is introduced into the analysis of the relations among interpretants, and as a result an example of the concept of the new information content, with regard to the elements of the structure of the handicapped and dementia situation, is proposed on the basis of a discussion of the characteristics of the compartment system.

Paper Nr: 70
Title:

EVALUATION OF UML IN PRACTICE - Experiences in a Traffic Management Systems Company

Authors:

Michel dos Santos Soares and Jos Vrancken

Abstract: This article reports on research performed by the authors into improving the software engineering process at a company that develops software-intensive systems. The company develops road traffic management systems using the object-oriented paradigm, with UML as the visual modeling language. Our hypothesis is that UML presents difficulties and drawbacks in certain system development phases and activities. Many of these problems have been reported in the literature, typically after applying UML to a single project and/or studying the language's formal specification and comparing it with other languages. Unfortunately, few publications are based on surveys and interviews with practitioners, i.e., the developers and project managers who use UML in real projects and frequently face these problems. As a matter of fact, some relevant questions, mainly related to UML problems in practice, were not fully addressed in past research. The purpose of this text is to report the main findings and the improvements proposed on the basis of other methods/languages, or even of UML diagrams that are not often used. The research methodology involved surveys, interviews and action research, with a system developed in order to implement the recommendations and evaluate the proposed improvements. The recommendations were considered feasible, as they do not propose radically changing the current situation, which would involve higher costs and risks.

Paper Nr: 147
Title:

A COMPARISON OF SECURITY SAFEGUARD SELECTION METHODS

Authors:

Thomas Neubauer

Abstract: IT security incidents pose a major threat to the efficient execution of corporate strategies and business processes. Although companies generally spend a lot of money on security, they are often not aware of how much they actually spend on it and, even more importantly, of whether these investments in security are effective. This paper provides decision makers with an overview of decision support techniques and describes the pros and cons of these methodologies.

Paper Nr: 159
Title:

PRIVACY FOR RFID-ENABLED DISTRIBUTED APPLICATIONS - Design Notes

Authors:

Mikaël Ates, Jacques Fayolle, Christophe Gravier, Jeremy Lardon and Rahul Garg

Abstract: This paper is concerned with RFID systems coupled with distributed applications. We do not treat known RFID attacks; rather, we focus on the best way to protect the identity mapping, i.e. the association between a tag identifier, which can be obtained or deduced from the tags or from communications involving them, and the real identity of its carrier. We rely on a common use case of a distributed application and on a modelling approach.

Paper Nr: 178
Title:

BRAIN PHYSIOLOGICAL CHARACTERISTIC ANALYSIS FOR SOFTWARE ANALYSIS SUPPORT ENVIRONMENTS

Authors:

Mikio Ohki and Haruki Murase

Abstract: In the field of industrial engineering, a number of studies on the production process have been conducted through the ages to achieve higher quality and productivity. For software development, on the other hand, no study has been conducted on an environment optimized for brain work from the viewpoints of personality, motivation and procedures, since brain work is not visible. Recently, however, devices that can measure the activation state of the brain in a practical work environment have become available. This paper analyzes software analysis tasks from the viewpoint of brain physiology, based on measurement results obtained from experiments using such a device, and discusses the fundamental issues and challenges in implementing an ideal software analysis support environment.

Paper Nr: 192
Title:

MODULAR BEHAVIOUR MODELLING OF SERVICE PROVIDING BUSINESS PROCESSES

Authors:

Ella Roubtsova, Lex Wedemeijer, Karel Lemmen and Ashley McNeile

Abstract: We examine possibilities for modularizing the executable models of Service Providing Business Processes in a way that allows reuse of common patterns across different applications. We argue that this requires the ability to create independent models for different aspects of the process and then compose these requirement-related partial behavioral models to form a complete solution. We identify two areas of modeling that should be separable from the main, application-specific, process model: the underlying subject matter with which the process is concerned, and standard reusable process-level behavior that is common across many processes. Using the example of a Service Providing Business Process concerned with Accreditation of Prior Learning, we show that the Protocol Modeling approach has the capability to support modularization of functional and non-functional requirements, whereas other paradigms cannot completely support it.

Paper Nr: 208
Title:

THE PATTERNS FOR INFORMATION SYSTEM SECURITY

Authors:

Diego Abbo and Lily Sun

Abstract: The territory of IS is continuously improving its capacities: new architectures grow at a brisk pace and, qualitatively, the functional processes are deepening the degree of interaction inherent in the services provided. In the logical and/or physical territory of application, security management must wisely face the inherent problems in the domains of prevention, emergency and forensic investigation. If the visionary plans are good, security breaches will fall within the "residual risk profiles" of a sound preventive risk analysis, and any further business development will match the costs of security safeguards against the detrimental economic consequences of security breaches. From that perspective, IS security should have a larger field of application than the traditional security vision, in the sense that the responsibility of a security domain should not consider only the immediate self-interest of the owner of the asset. IS security should consider horizontal and hierarchical integration and interoperability with all the correlated security systems, or with all the systems that need security, with an intrinsic capacity for evaluating any possible future model. The most efficient security turns out to be the one that can identify all the possible variables that constitute the basis for the patterns.

Paper Nr: 224
Title:

ALIGNING GOAL-ORIENTED REQUIREMENTS ENGINEERING AND MODEL-DRIVEN DEVELOPMENT

Authors:

Fernanda Alencar, Oscar Pastor, Beatriz Marín, Giovanni Giachetti and Jaelson Castro

Abstract: In order to fully capture the various facets of a system, a model should contain not only software specifications but should also integrate multiple complementary views. Model-Driven Development (MDD) and Goal-Oriented Requirements Engineering (GORE) are two modern approaches that deal with models and that can complement each other. We want to demonstrate that a sound software production process can start with a GORE-based requirements model and finish with advanced MDD-based techniques to generate a software product. Therefore, we intend to show that GORE and MDD can be successfully put together.

Paper Nr: 241
Title:

VAODA - A Viewpoint and Aspect-Oriented Domain Analysis Approach

Authors:

António Rodrigues and João Araújo

Abstract: Requirements approaches can be integrated to specify domain requirements. Among them, viewpoint-oriented approaches stand out by their simplicity, and efficiency in organizing requirements. However, none of them deals with modularization of crosscutting concerns. Aspect-Oriented Domain Analysis (AODA) is a growing area of interest as it addresses the problem of specifying crosscutting properties at domain analysis level. The goal is to obtain a better reuse at this abstraction level through the advantages of aspect orientation. This work proposes an AODA approach that integrates feature modeling and viewpoints.

Paper Nr: 292
Title:

TOWARDS A UNIFIED DOMAIN FOR FUZZY TEMPORAL DATABASES

Authors:

M. C. Garrido, N. Marín and O. Pons

Abstract: The primary aim of Temporal Databases (TDB) is to offer a common framework to those DB applications that need to store or handle temporal data of different natures or sources, since they allow the concept of time to be unified from the point of view of its meaning, its representation and its manipulation. At first sight, incorporating time into a DB may seem a direct and even simple task; on the contrary, it is a quite complex aim, because time may be provided by different sources, with different granularities and meanings. The situation becomes more complex when the time specification is made not in precise but in fuzzy terms, where, together with the inherent problems of the time domain, we have to consider the imprecision factor. To deal with this problem, the first task to perform is to unify as much as possible the representation of time, in order to be able to define the range and the semantics of the operators necessary to handle data of this type.
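For illustration, an imprecise time specification such as "around day 10" is commonly modelled as a trapezoidal possibility distribution over the time domain, and fuzzy temporal operators are defined on top of such distributions. The sketch below is a generic example of that idea only; the representation and operators actually chosen by the authors are not specified in the abstract, and the function names are ours.

```python
def fuzzy_time(a, b, c, d):
    """Trapezoidal possibility distribution for an imprecise time value:
    fully possible on [b, c], impossible outside [a, d], linear in between."""
    def mu(t):
        if t < a or t > d:
            return 0.0
        if b <= t <= c:
            return 1.0
        if t < b:
            return (t - a) / (b - a)
        return (d - t) / (d - c)
    return mu

def possibly_before(mu_x, mu_y, domain):
    """Degree of possibility that fuzzy time X occurs no later than fuzzy
    time Y: sup over pairs t1 <= t2 of min(mu_x(t1), mu_y(t2))."""
    return max(
        (min(mu_x(t1), mu_y(t2)) for t1 in domain for t2 in domain if t1 <= t2),
        default=0.0,
    )
```

For example, `fuzzy_time(8, 9, 11, 12)` models "around day 10": day 10 is fully possible, day 8.5 is possible to degree 0.5, and day 13 is impossible.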

Paper Nr: 313
Title:

PROCESS INSTITUTIONALIZATION USING SOFTWARE PROCESS LINES

Authors:

Tomás Martínez-Ruiz, Félix García and Mario Piattini

Abstract: Software process institutionalization is an important step which organizations must carry out if they are to improve their processes, and it must take place in a coherent manner, in accordance with the organization's policies. However, process institutionalization implies adapting processes from a set of the organization's standard processes, and these standard processes must be continually maintained and updated through the standardization of best practices, since adaptation in itself cannot create capable processes. In this paper we propose using the philosophy of software process lines to design a cycle and to specify a set of techniques and practices with which to institutionalize software processes. The cycle, techniques and practices include both process tailoring and process standardization, offering organizations an infrastructure with which to generate processes that are better fitted to their needs. The use of our cycle will enable capable processes to be tailored from software process lines, and the analysis of these processes will permit the improvement of the organization's set of standard processes and of the software process line.

Paper Nr: 397
Title:

A SYSTEMATIC LITERATURE REVIEW OF REQUIREMENTS ENGINEERING IN DISTRIBUTED SOFTWARE DEVELOPMENT ENVIRONMENTS

Authors:

Thaís Ebling, Jorge Luis Nicolas Audy and Rafael Prikladnicki

Abstract: On analyzing the main characteristics of the Distributed Software Development (DSD) phenomenon, we notice that they particularly affect Requirements Engineering (RE). As this phenomenon evolves, the result is an increase in the existing literature. For this reason, in this paper we report on a systematic review of the DSD literature, in which we looked for challenges and possible solutions related to RE in DSD environments. We also discuss gaps in this research area, which can be used to guide future research.

Paper Nr: 443
Title:

APPLICABILITY OF ISO/IEC 9126 FOR THE SELECTION OF FLOSS TOOLS

Authors:

María Pérez, Kenyer Domínguez, Edumilis Méndez and Luis E. Mendoza

Abstract: The trend towards the use of Free/Libre Open Source Software (FLOSS) tools is impacting not only how we work and how productivity can be improved when developing software, but is also promoting new work schemes and business models, specifically in small and medium-sized enterprises. The purpose of this paper is to present the applicability of ISO/IEC 9126 to the selection of FLOSS tools associated with three relevant software development disciplines: Analysis and Design, Business Models and Software Testing. The categories considered for the evaluation of these three types of tools are Functionality, Maintainability and Usability. From the results obtained in this research-in-progress, we have been able to determine that these three categories are the most relevant and suitable for evaluating FLOSS tools, thus pushing into the background all aspects associated with Portability, Efficiency and Reliability. Our long-term purpose is to refine quality models for other types of FLOSS tools.

Paper Nr: 448
Title:

A WORKFLOW LANGUAGE FOR THE EXPERIMENTAL SCIENCES

Authors:

Yuan Lin, Thérèse Libourel and Isabelle Mougenot

Abstract: Scientists in the environmental domains (biology, geographical information, etc.) need to capitalize, distribute and validate their experimentations of varying complexities. The concept of the scientific workflow is increasingly being considered to fulfill this requirement. After a short discussion of existing work, this article presents the first phase of the establishment of a workflow environment corresponding to the static part, i.e., a meta-model and a language dedicated to the design of process-chain models. We illustrate our proposal with a simple example from the spatial domain and conclude with perspectives that open up with the establishment of a workflow environment.

Paper Nr: 457
Title:

USING ONTOLOGIES WITH HIPPOCRATIC DATABASES - A Model for Protecting Personal Information Privacy

Authors:

Esraa Omran, Albert Bokma and Shereef Abu Al-Maati

Abstract: In the age of identity theft and the increased misuse of personal information held in databases, a crucial topic is the incorporation of privacy protection into database systems. Several initiatives have been created to address privacy protection in various forms, from legislation such as PIPEDA to policies such as P3P. Unfortunately, none of these effectively enforces the protection of data. Recent solutions have emerged to enforce data privacy and protection, such as the Hippocratic database, but this technique has proved complex in practice. To overcome this deficiency, we propose to use personal information ontologies in combination with Hippocratic databases. This method introduces a new way of reducing the complexity and of clearly identifying terms of privacy in the database architecture.

Paper Nr: 472
Title:

LINKING IT AND BUSINESS PROCESSES FOR ALIGNMENT - A Meta Model based Approach

Authors:

Matthias Goeken, Jan C. Pfeiffer and Wolfgang Johannsen

Abstract: Methods to optimize the alignment between an enterprise's business strategy and its IT strategy have been on the agenda of IS research since the beginning of the last decade. Given the growing impact of IT on both the revenue and the cost side of the P&L, alignment has become one of the most pressing issues of strategic IT management. One promising approach to achieving good alignment is to synchronize similar and related processes in both the business and the IT domain. In this contribution, we present an approach for identifying components of processes in both domains on the basis of existing meta models. We consider this a first step in developing a method-based coherent view of both domains, which will finally allow us to create a systematic and comprehensive alignment method.

Paper Nr: 475
Title:

MODELING WITH BPMN AND CHORDA: A TOP-DOWN, DATA-DRIVEN METHODOLOGY AND TOOL

Authors:

Andrea Catalano, Matteo Magnani and Danilo Montesi

Abstract: In this poster paper we present a methodology, named chorda, for the modeling of business processes with BPMN. Our methodology focuses on the peculiar features of this notation: its ability to illustrate different levels of abstraction, its support for both orchestration and choreography, and the representation of data flows. In particular, this last feature has been extended to allow a better mapping of real processes, where data often plays a fundamental role. To evaluate and tune the methodology, we have developed a tool supporting it.

Paper Nr: 481
Title:

PROACTIVE INSIDER-THREAT DETECTION - Against Confidentiality in Sensitive Pervasive Applications

Authors:

Joon S. Park, Jaeho Yim and Jason Hallahan

Abstract: The primary objective of this research is to mitigate insider threats against sensitive information stored in an organization's computer system, using dynamic forensic mechanisms to detect insiders' malicious activities. Among the various types of insider threats, which may break confidentiality, integrity or availability, this research focuses on violations of confidentiality through privilege misuse or escalation in sensitive applications. We identify insider-threat scenarios and then describe how to detect each threat scenario by analyzing primitive user activities; we implement our detection mechanisms by extending the capabilities of existing software packages. Since our approach can detect an insider's malicious behavior before the malicious action is finished, we can proactively prevent the possible damage. In this particular paper, the primary sources for our implementation are the Windows file system activities, the Windows Registry, the Windows Clipboard system, and printer event logs and reports. However, we believe our approaches to countering insider threats can also be applied to other computing environments.
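At its core, this kind of scenario-based detection amounts to checking whether a user's stream of primitive events contains a threat scenario as an ordered subsequence, and alerting before the final damaging step completes. The sketch below is a generic illustration under that assumption; the event names and the function `contains_scenario` are ours, not the paper's.

```python
def contains_scenario(events, scenario):
    """True if the ordered list `scenario` of event types appears as a
    subsequence of the observed `events` stream (other events may be
    interleaved). Running this check on every new event allows an alert
    to fire before the final step of the scenario completes."""
    it = iter(events)
    return all(any(observed == step for observed in it) for step in scenario)
```

For example, a hypothetical exfiltration scenario `["open_sensitive_file", "copy_to_clipboard", "print_document"]` matches a session in which those events occur in that order, regardless of unrelated activity in between.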

Paper Nr: 508
Title:

AN INTEGRATION-ORIENTED MODEL FOR APPLICATION LIFECYCLE MANAGEMENT

Authors:

Guenter Pirklbauer, Rudolf Ramler and Rene Zeilinger

Abstract: In recent years, a new trend has emerged in the software engineering tool market: Application Lifecycle Management (ALM). ALM aims at integrating processes and tools to coordinate development activities in software engineering. However, a common understanding or widely accepted definition of the term ALM has not yet evolved. Thus, companies introducing ALM are usually confronted with a wide range of solutions following different, vendor-specific interpretations. The aim of this paper is to clarify the concept of ALM and to provide guidance on how to develop an ALM strategy for software development organizations. The paper identifies key problem areas typically addressed by ALM and derives a model that relates the solution concepts of ALM to engineering and management activities. The work has been applied in the context of an improvement project conducted at an industrial company. This case shows how the model can be used to systematically develop a tailored, vendor-independent ALM solution.

Paper Nr: 533
Title:

CHALLENGES AND PERSPECTIVES IN THE DEPLOYMENT OF DISTRIBUTED COMPONENTS-BASED SOFTWARE

Authors:

Mariam Dibo and Noureddine Belkhatir

Abstract: Software deployment encompasses all post-development activities that make an application operational. It covers activities such as packaging, installation, configuration, application start-up and updates. These deployment activities on large infrastructures are increasingly complex, which has led to different works, generally developed in an ad hoc way and consequently specific to a middleware such as J2EE, .NET or CCM. Each middleware designs its own specific deployment mechanisms and tools. The objective of this work is to propose a generic deployment approach, independent of the target environments, and to provide the abstractions necessary to describe the software to be deployed, the deployment infrastructures and the deployment process, with the identification and organization of the activities to be carried out and support for their execution. Our approach is model-driven, and our contribution is a generic deployment framework.

Paper Nr: 551
Title:

DATABASE MARKETING PROCESS SUPPORTED BY ONTOLOGIES - System Architecture Proposal

Authors:

Filipe Mota Pinto, Alzira Marques and Manuel Filipe Santos

Abstract: This work proposes an ontology-based system architecture that works as a developer guide for the database marketing practitioner. Marketing departments deal daily with a great volume of data that is normally task- or marketing-activity-dependent, which sometimes requires a specific knowledge background and framework. This article aims to introduce an unexplored line of research in Database Marketing: the ontological approach to the Database Marketing process. Here we propose a generic framework supported by ontologies and by techniques for knowledge extraction from databases. This paper therefore has two purposes: to integrate the ontological approach into Database Marketing, and to create a domain ontology with a knowledge base that will enhance the entire process at both levels, marketing and knowledge extraction techniques. Our work is based on the Action Research methodology. At the end of this research, we present some experiments in order to illustrate how the knowledge base works and how it can be useful to the user.

Paper Nr: 569
Title:

ON TECHNOLOGY INNOVATION - A Community Succession Model for Software Enterprise

Authors:

Qianhui Liang and Weihui Dai

Abstract: In this paper, we take an economic approach to technological innovation in order to study the issue of evolution in software enterprises. Based on the Lotka–Volterra equations and an equilibrium formula, we build a model of the dynamics of software technological innovations. The model is applied to derive typical succession patterns of communities and a method for optimal co-existence and interaction among the communities. We validate our model with a case study on the development process of software enterprises.
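As a generic illustration of the kind of dynamics involved (the paper's actual parameterisation is not given in the abstract), two competing technology communities can be modelled with the competitive Lotka–Volterra equations; when competition is weak enough, the trajectories settle at the interior coexistence equilibrium. All parameter values below are our own example figures.

```python
def lv_competition(x0, y0, r=(0.8, 0.6), K=(100.0, 80.0), a=(0.5, 0.4),
                   dt=0.01, steps=20000):
    """Euler integration of two-community competitive Lotka-Volterra dynamics:
        dx/dt = r1 * x * (1 - (x + a12*y) / K1)
        dy/dt = r2 * y * (1 - (y + a21*x) / K2)
    x and y are the community sizes; a12 and a21 couple the two communities."""
    x, y = x0, y0
    for _ in range(steps):
        dx = r[0] * x * (1 - (x + a[0] * y) / K[0])
        dy = r[1] * y * (1 - (y + a[1] * x) / K[1])
        x += dt * dx
        y += dt * dy
    return x, y

def coexistence_equilibrium(K=(100.0, 80.0), a=(0.5, 0.4)):
    """Closed-form interior equilibrium: solve x + a12*y = K1, y + a21*x = K2."""
    denom = 1 - a[0] * a[1]
    return (K[0] - a[0] * K[1]) / denom, (K[1] - a[1] * K[0]) / denom
```

With the default parameters the equilibrium is (75, 50), and the simulated trajectory converges to it from a wide range of starting sizes, which is the stable-coexistence succession pattern.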

Paper Nr: 610
Title:

MODELLING LOCATION-AWARE BEHAVIOUR IN WEB-GIS USING ASPECTS

Authors:

Ana Oliveira, Matias Urbieta, João Araújo, Armanda Rodrigues, Ana Moreira, Silvia Gordillo and Gustavo Rossi

Abstract: Web-GIS applications evolve fast, as new requirements emerge constantly. Some of these requirements, particularly those related to spatial behaviours, might crosscut previous core application requirements. Conventional modelling techniques, which ignore the effects of crosscutting concerns (such as tangled and scattered behaviours), negatively affect modularity and thus compromise application maintenance. In this paper, we present an aspect-oriented approach to modelling crosscutting concerns in Web-GIS applications, particularly those related to spatial features. The process introduced in this paper starts with the identification and specification of crosscutting concerns, followed by the composition of these concerns using the MATA language.

Paper Nr: 622
Title:

INSTRUCTIONAL DESIGN FOR JAVA ENTERPRISE COMPONENT TECHNOLOGY

Authors:

Marco Marcellis, Ella Roubtsova and Bert Hoogveld

Abstract: We present a method for developing instructions for teaching component-based development of enterprise applications. The method considers the development of an enterprise application as a complex task that has to be taught as a whole. The requirements on user access and on the back-end systems serve as a natural means for choosing the learning tasks. We fixed the back-end system and separated the task classes based on the requirements on user access: a web browser and an application client. Within each of these task classes, the requirements are decomposed into "create", "retrieve", "update" and "remove" groups of functionality. Each of these functionalities can be seen as a simple enterprise application, can be of a different level of complexity, and may be implemented with local and remote clients and different types of components. The Java enterprise component technology is used for the implementation of the learning tasks.

Paper Nr: 635
Title:

INNOVATIVE HEALTH CARE CHANNELS - Towards Declarative Electronic Decision Support Systems Focusing on Patient Security

Authors:

Kerstin Ådahl, Jenny Lundberg and Rune Gustavsson

Abstract: The main contribution of this paper is a structured approach supporting validated quality of information sharing in health care settings. Protocols, at different system levels, are used as a method to design and implement intelligible information sharing structures. Our approach can best be seen as a context-dependent information modelling framework that could be implemented using, e.g., Web 2.0 techniques in a professional context. The main challenge is how to convey and analyze, in a trustworthy way, the huge amounts of information available in health care contexts. Our innovative health care channel concept provides an approach for analyzing and structuring information, as well as contextual support towards increasing patient security.

Area 4 - Software Agents and Internet Computing

Full Papers
Paper Nr: 93
Title:

e-Learning in Logistics Cost Accounting Automatic Generation and Marking of Exercises

Authors:

Markus Siepermann and Christoph Siepermann

Abstract: This paper presents the concept and realisation of an e-learning tool that provides predefined or automatically generated exercises on logistics cost accounting. Students may practise wherever and whenever they like via the Internet. Their solutions are marked automatically by the tool, taking consecutive faults into account, without any intervention by lecturers.
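The consecutive-fault principle mentioned above can be sketched as follows: a step is accepted if it is the correct function of the student's own previous answer, so an early mistake is penalised only once rather than propagating through every dependent step. This is a generic illustration, not the tool's actual marking scheme; the function `mark_chain` and the example figures are our assumptions.

```python
def mark_chain(step_fns, x0, answers, tol=1e-9):
    """Mark a chain of dependent calculation steps.
    step_fns[i] computes the expected value of step i from the previous
    value; the expectation is recomputed from the student's own previous
    answer, so a consecutive fault is not penalised twice."""
    marks, prev = [], x0
    for fn, ans in zip(step_fns, answers):
        marks.append(abs(ans - fn(prev)) <= tol)
        prev = ans  # follow the student's value, right or wrong
    return marks
```

For a hypothetical exercise (total cost = base + 50, margin = 10% of total, base = 60), a student answering 115 and 11.5 has the first step marked wrong but the second step marked correct, since 11.5 is the right margin for the student's own total of 115.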

Paper Nr: 156
Title:

Towards Successful Virtual Communities

Authors:

Julien Subercaze, Christo El Morr, Pierre Maret, Adrien Joly, Matti Koivisto, Panayotis Antoniadis and Masayuki Ihara

Abstract: With the multiplication of communication media, increasingly multi-partner global organizations, remote-working tendencies, dynamic teams, and pervasive or ubiquitous computing, Virtual Communities (VCs) are playing an increasing role in social organizations and will probably profoundly change the way people interact in the future. In this paper, we present our position on the key characteristics that are imperative for a successful VC, as well as future directions in terms of research, development and implementation. We identify three main aspects (business, technical and social) and analyze for each of them the different components and their relationships.

Paper Nr: 271
Title:

A Multiagent-system for Automated Resource Allocation in the IT Infrastructure of a Medium-sized Internet Service Provider

Authors:

Michael Schwind and Marc Goederich

Abstract: In this article we present an agent-based system designed for the automated allocation of web hosting services to the IT resources of a medium-sized Internet service provider (ISP). The system is capable of finding a cost-minimizing allocation of web hosting services on the distributed IT infrastructure of the ISP. For this purpose, an agent that can independently determine a price for each package of web hosting services is assigned to each resource. The allocation mechanism employs a system of price and cost functions to form an economic model that guarantees a continuous capacity load for the company's IT resources. According to the demand for web hosting services, resource agents can invest in the acquisition of IT infrastructure. These investments have to be amortized by the resource agents using the returns yielded by the web services sold to the ISP's customers. Using real-world demand profiles for web service packages taken from the operational systems of a medium-sized ISP, we were able to demonstrate the stability of the resource allocation system.

Paper Nr: 306
Title:

AgEx: A Financial Market Simulation Tool for Software Agents

Authors:

Paulo André L. de Castro and Jaime S. Sichman

Abstract: Many researchers in the software agent field use the financial domain as a test bed to develop the adaptation, cooperation and learning skills of software agents. However, there are no open source financial market simulation tools available that provide a suitable environment for agents, with real information about assets and an order execution service. To address this demand, this paper proposes an open source financial market simulation tool, called AgEx. This tool allows traders launched from distinct computers to act in the same market. Communication among agents is performed through FIPA ACL and uses a market ontology created specifically for trader agents. We implemented several traders using AgEx and performed many simulations using data from real markets. The results allowed us to test and comparatively assess the traders' performance against each other in terms of risk and return. We verified that the effort to implement and test trader agents was significantly diminished by the use of AgEx. Furthermore, the results indicated new directions in trader strategy design.

Paper Nr: 322
Title:

A Domain Analysis Approach for Multi-agent Systems Product Lines

Authors:

Ingrid Nunes, Uirá Kulesza, Camila Nunes, Carlos J. P. de Lucena and Elder Cirilo

Abstract: In this paper, we propose an approach for documenting and modeling Multi-agent System Product Lines (MAS-PLs) in the domain analysis stage. MAS-PLs are the integration of two promising techniques, software product lines and agent-oriented software engineering, aiming at incorporating their respective benefits and helping the industrial exploitation of agent technology. Our approach explores the scenario of adding agency features to existing web applications and is based on PASSI, an agent-oriented methodology, which we extended to address agency variability. A case study, OLIS (OnLine Intelligent Services), illustrates our approach.

Paper Nr: 324
Title:

A Reputation-based Game for Tasks Allocation

Authors:

Hamdi Yahyaoui

Abstract: We present in this paper a distributed game-theoretical model for task allocation. During the game, each agent submits a cost for achieving a specific task. Each agent that is offering a specific task computes the so-called reputation-based cost, which is the product of the submitted cost and the inverse of the reputation value of the bidding agent. The game winner is the agent with the minimal reputation-based cost. We show how the use of reputation allows a better allocation of tasks with respect to a conventional allocation that does not consider reputation as a criterion for allocating tasks.
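The allocation rule stated in the abstract is concrete enough to sketch. The following Python snippet is an illustrative reconstruction, not the author's code; the agent names and the reputation scale in (0, 1] are assumptions. It computes the reputation-based cost as the submitted cost multiplied by the inverse reputation, and picks the bidder with the minimal value:

```python
def reputation_based_cost(bid_cost, reputation):
    """Submitted cost times the inverse of the bidder's reputation."""
    return bid_cost / reputation

def allocate_task(bids):
    """bids maps an agent id to (submitted cost, reputation in (0, 1]).
    The game winner is the agent with the minimal reputation-based cost."""
    return min(bids, key=lambda agent: reputation_based_cost(*bids[agent]))

# a2 does not submit the lowest cost, but wins thanks to its high reputation:
# costs are 10/0.5 = 20, 12/0.9 ~ 13.3, 11/0.6 ~ 18.3
bids = {"a1": (10.0, 0.5), "a2": (12.0, 0.9), "a3": (11.0, 0.6)}
winner = allocate_task(bids)
```

The example illustrates the paper's point: a cheap bid from a low-reputation agent can lose to a costlier bid from a trusted one.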

Paper Nr: 386
Title:

Remote Controlling and Monitoring of Safety Devices using Web-Interface Embedded Systems

Authors:

A. Carrasco, M. D. Hernández, M. C. Romero, F. Sivianes and J. I. Escudero

Abstract: To date, access control systems have been hardware-based platforms in which the software and hardware parts were decoupled into different systems. The Department of Electronic Technology at the University of Seville, together with ISIS Engineering, has developed an innovative embedded system that provides all the functions needed to control and monitor remote access control systems through a built-in web interface. The design provides a monolithic structure, independence from outer systems, ease of management and maintenance, conformance to the highest security standards, and straightforward adaptability to applications other than the original one. We accomplished this by using an extremely reduced Linux kernel and developing the web and purpose-specific logic with software technologies that make optimal use of resources.

Paper Nr: 407
Title:

Recognizing Customers' Mood in 3D Shopping Malls based on the Trajectories of their Avatars

Authors:

Anton Bogdanovych, Mathias Bauer and Simeon Simoff

Abstract: This paper proposes a method to assess the cognitive state of a human embodied as an avatar inside a 3-dimensional virtual shop. To do so, we analyze the trajectories of the avatar's movements and classify them against a set of predefined prototypes. To perform the classification we use a trajectory comparison algorithm based on a combination of the Levenshtein distance and the Euclidean distance. The proposed method is applied in a distributed manner to the problem of making autonomous assistants in virtual stores recognize customers' intentions.
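The abstract names the two distances but not how they are combined, so the following Python sketch is only a plausible reading, not the authors' algorithm: it quantizes each trajectory into compass-direction symbols (an assumption) for the Levenshtein part, and blends a normalized edit distance with the mean pointwise Euclidean distance via an assumed weight `w`:

```python
import math

def levenshtein(a, b):
    """Classic edit distance between two symbol sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,                           # deletion
                          d[i][j - 1] + 1,                           # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return d[m][n]

def directions(traj):
    """Quantize a 2D trajectory into a string of compass moves (E/W/N/S)."""
    syms = []
    for (x0, y0), (x1, y1) in zip(traj, traj[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) >= abs(dy):
            syms.append("E" if dx >= 0 else "W")
        else:
            syms.append("N" if dy >= 0 else "S")
    return "".join(syms)

def trajectory_distance(t1, t2, w=0.5):
    """Hypothetical blend: w * normalized edit distance over direction
    strings + (1 - w) * mean Euclidean distance over paired points."""
    s1, s2 = directions(t1), directions(t2)
    lev = levenshtein(s1, s2) / max(len(s1), len(s2), 1)
    k = min(len(t1), len(t2))
    euc = sum(math.dist(t1[i], t2[i]) for i in range(k)) / k
    return w * lev + (1 - w) * euc

def classify(traj, prototypes):
    """Assign an avatar trajectory to the closest predefined prototype."""
    return min(prototypes, key=lambda name: trajectory_distance(traj, prototypes[name]))
```

Classification against prototypes then reduces to a nearest-neighbour lookup, e.g. `classify(observed, {"to_checkout": [...], "window_shopping": [...]})` with prototype names invented here for illustration.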

Paper Nr: 465
Title:

Assembling and Managing Virtual Organizations out of Multi-party Contracts

Authors:

Evandro Bacarin, Edmundo R. M. Madeira and Claudia Medeiros

Abstract: Assembling virtual organizations is a complex process, which can be modeled and managed by means of a multi-party contract. Such a contract must encompass seeking consensus among parties in some issues, while simultaneously allowing for competition in others. Present solutions in contract negotiation are not satisfactory because they do not accommodate such a variety of needs and negotiation protocols. This paper shows our solution to this problem, discussing how our SPICA negotiation protocol can be used to build up virtual organizations. It assesses the effectiveness of our approach and discusses the protocol’s implementation.

Paper Nr: 497
Title:

A Video-based Biometric Authentication for e-Learning Web Applications

Authors:

Bruno Elias Penteado and Aparecido Nilceu Marana

Abstract: In recent years there has been exponential growth in the offering of Web-enabled distance courses and in the number of enrolments in corporate and higher education using this modality. However, the lack of efficient mechanisms that ensure user authentication in this sort of environment, both at system login and throughout the session, has been pointed out as a serious deficiency. Some studies have been conducted on possible biometric applications for web authentication; however, password-based authentication still prevails. With the popularization of biometric-enabled devices and the resulting fall in prices for collecting biometric traits, biometrics is being reconsidered as a secure form of remote authentication for web applications. In this work, the accuracy of face recognition, captured on-line by a webcam in an Internet environment, is investigated, simulating the natural interaction of a person in the context of a distance course. Partial results show that this technique can be successfully applied to confirm the presence of users throughout course attendance. An efficient client/server architecture is also proposed.

Paper Nr: 521
Title:

Modeling JADE Agents from GAIA Methodology under the Perspective of Semantic Web

Authors:

Ig Ibert Bittencourt, Pedro Bispo, Evandro Costa, João Pedro, Douglas Véras, Diego Dermeval and Henrique Pacca

Abstract: Building multi-agent software systems is known to be a highly complex task, and researchers have raised different issues in building a variety of applications. Several AOSE methodologies and MAS frameworks have therefore been proposed to facilitate the hard task of modeling and building such complex systems. However, in attempting to model complex systems, those methodologies end up being hard to use and struggle to ensure consistency between their parts. On the other hand, ontologies have been considered useful for representing the knowledge of software engineering techniques and methodologies, providing an unambiguous terminology that can be shared and reused and that ensures consistency between the concepts involved. This paper proposes ontologies for specifying agents through the GAIA methodology, the JADE framework, and SWRL rules that map instances from the GAIA ontology to the JADE ontology. Finally, a case study and a discussion are presented to demonstrate their use.

Paper Nr: 583
Title:

A Business Service Selection Model for Automated Web Service Discovery Requirements

Authors:

Tosca Lahiri and Mark Woodman

Abstract: Automated web service (WS) discovery, i.e. discovery without human intervention, is a goal of service-oriented computing. So far it is an elusive goal. The weaknesses of UDDI and other partial solutions have been extensively discussed, but little has been articulated concerning the totality of requirements for automated web service discovery. Our work has led to the conclusion that solving automated web service discovery will not be found through solely technical thinking. We argue that the business motivation for web services must be given prominence and so have looked to processes in business for the identification, assessment and selection of business services in order to assess comprehensively the requirements for web service discovery and selection. The paper uses a generic business service selection model as a guide to analyze a comprehensive set of requirements for facilities to support automated web service discovery. The paper presents an overview of recent work on aspects of WS discovery, proposes a business service selection model, considers a range of technical issues against the business model, articulates a full set of requirements, and concludes with comments on a system to support them.

Short Papers
Paper Nr: 53
Title:

A PROCESS FOR IMPLEMENTING ONLINE AND PHYSICAL BUSINESS BASED ON A STRATEGY INTEGRATION ASPECT

Authors:

Ing-Long Wu, Chu-Ying Fu and Chin-Wei Lee

Abstract: E-business has undergone tremendous change in recent years. Its initial prosperity has been replaced by a recent wave of bankruptcies and acquisitions. Business managers are beginning to consider new business models based on the integration of virtual and physical operations, to support more, and different, services to customers. The key to success in e-business lies in how the integration between online and offline business is carried out. However, past research has only discussed the relevant issues of clicks-and-bricks strategy and channel management in general terms; a complete and solid process to effectively guide the implementation of the integration has been lacking. Therefore, this study proposes a three-step process based on a strategic perspective: (1) strategy integration, (2) channel coordination, and (3) synergy realization. The results indicate that the right strategy integration significantly influences synergy realization through effective channel coordination.

Paper Nr: 61
Title:

USING GRIDS TO SUPPORT INFORMATION FILTERING SYSTEMS - A Case Study of Running Collaborative Filtering Recommendations on gLite

Authors:

Leandro N. Ciuffo and Elisa Ingrà

Abstract: Today's business is becoming increasingly computationally intensive. Grid computing is a powerful paradigm for running ever-larger workloads and services. Commercial users have been attracted by this technology, which can potentially be exploited by industries and SMEs to offer new services at reduced cost and with higher performance. This work presents a "gridified" implementation of a recommender system based on the classic collaborative filtering algorithm. It also introduces the core services of the gLite middleware and discusses the potential benefits of using Grids to support the development of such systems.
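For readers unfamiliar with the algorithm being "gridified", a minimal single-node sketch of classic user-based collaborative filtering follows. This is a generic textbook formulation, not the paper's gLite implementation; the user ids, items and ratings are invented for illustration. It predicts a user's rating for an unseen item as the similarity-weighted average of other users' ratings:

```python
import math

def cosine(u, v):
    """Cosine similarity computed over the items both users rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(u[i] ** 2 for i in common))
           * math.sqrt(sum(v[i] ** 2 for i in common)))
    return num / den if den else 0.0

def predict(ratings, user, item):
    """Similarity-weighted average of other users' ratings for the item."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine(ratings[user], r)
        num += s * r[item]
        den += abs(s)
    return num / den if den else None

ratings = {
    "u1": {"A": 5, "B": 3},
    "u2": {"A": 4, "B": 3, "C": 4},
    "u3": {"A": 1, "B": 5, "C": 2},
}
# u1's taste resembles u2's, so the prediction for item "C" leans toward 4
p = predict(ratings, "u1", "C")
```

On a Grid, the similarity computations for disjoint user blocks are independent, which is what makes the algorithm a natural fit for job-level parallelisation.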

Paper Nr: 103
Title:

A P2P IMPLEMENTATION FOR THE HIGH AVAILABILITY OF WEB SERVICES

Authors:

Zakaria Maamar, Mohamed Sellami, Samir Tata and Quan Z. Sheng

Abstract: This paper introduces a P2P-based approach to sustaining the high availability of Web services using similarity-based replication strategies. To this end, three strategies, known as active, passive, and hybrid, are studied. This approach takes replication one step further by focusing on Web services that offer the same functionality as the original Web service (i.e., the one to back up). This functional similarity is built upon communities that gather similarly-functional Web services. To prove the suitability of the selected replication strategy for Web service high availability, a P2P testbed on top of the JXTA platform was developed.

Paper Nr: 119
Title:

UBIQUITOUS SOFTWARE DEVELOPMENT DRIVEN BY AGENTS’ INTENTIONALITY

Authors:

Milene Serrano, Maurício Serrano and Carlos José Pereira de Lucena

Abstract: Ubiquitous computing is a novel computational paradigm in which users' mobility, device heterogeneity and the need for service omnipresence are intrinsic and intense. In this context, ubiquitous software development poses particular challenges that are not yet addressed by the traditional approaches found in the Software Engineering community. To improve ubiquitous software development, this paper describes a detailed technological set based on multi-agent systems (MAS), goal orientation, the BDI (Belief-Desire-Intention) model, and various frameworks and conceptual models.

Paper Nr: 125
Title:

A SCHEME OF STRATEGIES FOR REAL-TIME WEB COLLABORATION BASED ON AJAX/COMET TECHNIQUES FOR LIVE RIA

Authors:

Walter Balzano, Maria Rosaria del Sorbo and Luca Di Liberto

Abstract: Recent advances in web applications, though significant, do not yet allow them to substitute for desktop applications: data are often redundant, user interactions with web interfaces differ notably from interactions with desktop interfaces, and data propagation is not fully instantaneous. This work presents strategies for real-time client/server and server-mediated client/client communication to manage multiuser collaboration in innovative ways: the user can rely on two recent, standards-based techniques, AJAX (Garrett J., 2005) and Comet (Russell A., 2006), without installing plug-ins. Web 2.0 (O'Reilly T., 2007) was this project's guiding philosophy, under which a mash-up of two features was realized: a chat module and a WYSIWYG (What You See Is What You Get) text editor. The results show the potential for cooperation among users interacting simultaneously through simple graphical interfaces, with a substantial speed-up of their work. Many applications exploit client/server interactions, directly sending updated information to connected users, as in online auctions (Wellman M. et al., 1998), multiuser cooperation (Tapiador A. et al., 2006) and e-learning (T. Chan, 2007) systems. Further details about the code and simple demos are available at http://people.na.infn.it/~wbalzano/AJAX.

Paper Nr: 131
Title:

ON THE HELPFULNESS OF PRODUCT REVIEWS - An Analysis of Customer-to-Customer Trust on eShop-Platforms

Authors:

Georg Peters and Vasily Andrianov

Abstract: In the last decade, the market share of online stores in the retail sector has risen constantly, partly replacing traditional face-to-face shops in cities and shopping malls. One reason is that the cost structure of online shops is lower than that of classic shops, since the latter have to finance physical stores and sales personnel. On the one hand, this often leads to a strategic cost advantage and results in lower selling prices. On the other hand, online stores normally do not provide personal consulting services as traditional face-to-face shops do. However, online shops have established different forms of product consulting to compensate for the missing personal advice of sales staff in a physical shop; examples are product-related hotlines and online chatrooms. An even cheaper option is to establish a recommendation system in which previous buyers are invited to write reviews of a product. Some eShops even provide a kind of cascading system: a product review written by a customer can be classified as helpful or not by other customers. In our research we focus on this second cascade. The objective of our paper is to analyze whether there are structures or rules that make product reviews written by customers helpful to other customers.

Paper Nr: 151
Title:

OFLOSSC, AN ONTOLOGY FOR SUPPORTING OPEN SOURCE DEVELOPMENT COMMUNITIES

Authors:

Isabelle Mirbel

Abstract: Open source development is a particular case of distributed software development with a volatile project structure and no clearly defined organization, where activity coordination is mostly based on the use of Web technologies. The dynamic and free nature of this kind of project raises new challenges for knowledge sharing. In this context, we propose a semantic Web approach to enhance coordination and knowledge sharing inside such communities. The purpose of this paper is to present OFLOSSC, the ontology we propose as the backbone of our approach. It is dedicated to the annotation of community members and resources to support knowledge management services. While building OFLOSSC, our aim was twofold. On the one hand, we wanted to reuse the ontologies on open source available in the literature. On the other hand, we adopted a community-of-practice point of view to acquire the pertinent concepts for annotating resources of an open source development community. This standpoint emphasizes the sharing dimensions in knowledge management services.

Paper Nr: 177
Title:

INFLUENCING FACTORS FOR THE ADOPTION OF m-COMMERCE APPLICATIONS - A Multiple Case Study

Authors:

Saira Ashir Zeeshan, Yen Cheung and Helana Scheepers

Abstract: In the last few years, mobile commerce (m-commerce) has evolved to provide its users with a set of applications that offer greater communication and flexibility. As these m-commerce applications become popular, organizations are adopting them to provide such services to their customers. This paper explores the factors that influence the adoption of m-commerce applications by organizations. The research question addressed in this paper is: what factors affect an organization's decision to adopt m-commerce? To answer it, a research model adapted from the framework presented by Wang and Cheung (2004) is proposed. The model examines the influencing factors at three levels: organizational, environmental and managerial. A multiple case study approach is employed as the research method to validate the model. Findings from this research enhance m-commerce research and help businesses better plan their adoption of m-commerce applications.

Paper Nr: 191
Title:

BLOG CLASSIFICATION USING K-MEANS

Authors:

Ki Jun Lee, Myungjin Lee and Wooju Kim

Abstract: With the recent exponential growth of blogs, a vast amount of important data has appeared on them. However, the dynamic, autonomous, and personal features of blogs make blog pages quite different from general web pages in many aspects. This causes many problems that general search engines cannot handle properly. One problem on which we focus in this study is that blog pages are inherently poorly organized and highly duplicated, which means that blog search engines cannot help but return poorly organized and duplicated results. To solve this problem, we propose a blog classification method using K-means and present a blog search result reorganization approach based on it. In this study, we first review the current status and performance of blogs and blog search engines. Secondly, we adopt the K-means algorithm as a base algorithm and devise a blog title classification method to reorganize the blog titles returned by a search engine. Finally, by implementing a prototype system of our algorithm, we evaluate its effectiveness and present conclusions and directions for future work. We expect this algorithm to improve the usability of current blog search engines.
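The core idea of clustering blog titles with K-means can be sketched in a few lines. The snippet below is a generic illustration under stated assumptions, not the paper's system: titles are turned into bag-of-words vectors (the paper does not specify its features), and plain Lloyd's algorithm groups them; the sample titles and first-k seeding are invented for the demo.

```python
def vectorize(titles):
    """Bag-of-words vectors over the shared vocabulary of all titles."""
    vocab = sorted({w for t in titles for w in t.lower().split()})
    idx = {w: i for i, w in enumerate(vocab)}
    vecs = []
    for t in titles:
        v = [0.0] * len(vocab)
        for w in t.lower().split():
            v[idx[w]] += 1.0
        vecs.append(v)
    return vecs

def kmeans(vecs, k, iters=20):
    """Plain Lloyd's algorithm with first-k seeding (a real system would
    use random or k-means++ seeding); returns one cluster label per vector."""
    centers = [list(vecs[i]) for i in range(k)]
    labels = [0] * len(vecs)
    for _ in range(iters):
        # assignment step: nearest center by squared Euclidean distance
        labels = [min(range(k),
                      key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])))
                  for v in vecs]
        # update step: each center becomes the mean of its members
        for c in range(k):
            members = [v for v, l in zip(vecs, labels) if l == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

titles = ["python web tutorial", "best pasta recipe",
          "python web guide", "easy pasta recipe"]
labels = kmeans(vectorize(titles), k=2)  # programming vs. cooking titles
```

Reorganizing search results then amounts to presenting one representative per cluster instead of a flat, duplicated list.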

Paper Nr: 203
Title:

FLESHING OUT CLUES ON GROUP PROGRAMMING LEARNING

Authors:

Thais Castro, Hugo Fuks, Leonardo Santos and Alberto Castro

Abstract: This work examines the findings of a case study carried out in the first semester of 2008 that uses a programming progression learning scheme, moving from individual to group programming. This approach implies the generation of conversation logs among students as they take part in group programming. The supporting strategies are the evidence fleshed out from those logs; these strategies will guide the teacher's inferences in subsequent group programming practical sessions.

Paper Nr: 205
Title:

MOBILE DEVICE LOCATION INFORMATION ACQUISITION FRAMEWORK FOR DEVELOPMENT OF LOCATION INFORMATION WEB APPLICATIONS - Design and Architecture

Authors:

Andrej Dolmac, Stephan Haslinger and Schahram Dustdar

Abstract: Services based on mobile device location information are one of the key drivers in today's telecommunication market. Current development of such solutions focuses on particular services; there is nearly no effort to build a framework that, instead of targeting one particular location-based service, provides developers with a set of tools enabling easier and faster development of new services based on mobile device location information. Within the EUREKA project MyMobileWeb, a framework was implemented for acquiring location information from mobile devices. The framework architecture enables obtaining location information from various mobile devices and is not bound to any particular device type or capability. Furthermore, the architecture can be used to obtain not only location information but also any other information from a mobile device, such as the battery level.

Paper Nr: 305
Title:

GAIA4E: A TOOL SUPPORTING THE DESIGN OF MAS USING GAIA

Authors:

Luca Cernuzzi and Franco Zambonelli

Abstract: Different efforts have been devoted to improving the original version of the Gaia methodology. The most relevant is the official extension of Gaia, which exploits organizational abstractions to provide clear guidelines for the analysis and design of complex and open multiagent systems. However, nowadays a successful design methodology should also include other strategic factors, such as the support of a specific CASE tool to simplify the designer's work. Such a tool supporting the Gaia design process may facilitate the adoption of the methodology in the industrial arena. The present study introduces Gaia4E, a plug-in for the Eclipse environment that covers all phases of Gaia, allowing agent engineers to produce and document the corresponding models.

Paper Nr: 326
Title:

IDENTIFYING HOMOGENEOUS CUSTOMER SEGMENTS FOR LOW-RISK EMAIL MARKETING EXPERIMENTS

Authors:

George Sammour, Benoît Depaire, Koen Vanhoof and Geert Wets

Abstract: Research in email marketing is divided into two broad areas: spam and improving response rates. In this paper we propose a methodology that allows companies to experiment with their email campaigns to increase the campaigns' response rate. This methodology is particularly suited to companies that are reluctant to experiment with their customers' data, fearing a drop in the response rate due to unsuccessful changes to the email campaign. The goals of this research were achieved in two steps. Firstly, homogeneous groups of customers are identified, largely eliminating any hindering heterogeneity. Secondly, customers that are not clicking and/or have a low click rate within their homogeneous groups are identified.

Paper Nr: 383
Title:

EMAIL-BASED INTEROPERABILITY SERVICE UTILITIES FOR COOPERATIVE SMALL AND MEDIUM ENTERPRISES

Authors:

Hong-Linh Truong, Enrico Morten, Michal Laclavik, Thomas Burkhart, Martin Carpenter, Christoph Dorn, Panagiotis Gkouvas, Kalaboukas Konstantinos, Dario Luiz Lopez, Cesar Marin, Christian Melchiorre, Ana Pinuela, Martin Seleng and Dirk Werth

Abstract: As most SMEs use email to conduct business, email-based interoperability solutions for SMEs can have a profound effect on their business. This paper presents a utility-like system that supports specialized SMEs in improving their business via email by providing system, semantic and process interoperability solutions for individual SMEs and networks of cooperative SMEs. We describe the concept of the Email-based Interoperability Service Utility (EISU) and a software framework that provides interoperability solutions at almost zero cost.

Paper Nr: 403
Title:

ONTOLOGY-BASED EMAIL CATEGORIZATION AND TASK INFERENCE USING A LEXICON-ENHANCED ONTOLOGY

Authors:

Prashant Gandhi and Roger Tagg

Abstract: Today’s knowledge workers are increasingly faced with the problem of information overload as they use current IT systems for performing daily tasks and activities. This paper focuses on one source of overload, namely electronic mail. Email has evolved from being a basic communication tool to a resource used – and misused – for a wide variety of purposes. One possible approach is to wean the user away from the traditional, often cluttered, email inbox, toward an environment where sorted and prioritized lists of tasks are presented. This entails categorizing email messages around personal work topics, whilst also identifying implied tasks in messages that users need to act upon. A prototype email agent, based on the use of a personal ontology and a lexicon, has been developed to test these concepts in practice. During the work, an opportunistic user survey was undertaken to try to better understand the current task management practices of knowledge workers and to aid in the identification of potential future improvements to our prototype.

Paper Nr: 455
Title:

AN ADAPTIVE MIDDLEWARE FOR MOBILE INFORMATION SYSTEMS

Authors:

Volker Gruhn and Malte Hülder

Abstract: Advances in mobile telecommunication networks as well as in mobile device technology have stimulated the development of a wide range of mobile applications. While it is sensible to install at least some application components on mobile devices to gain independence from rather unreliable mobile network connections, it is difficult to decide which application components are suitable and how much data should be provided. Because the environment of a mobile device can change and mobile business processes evolve over time, the mobile system should adapt to these changes dynamically to ensure productivity. In this paper, we present a mobile middleware that targets typical problems of mobile applications and dynamically adapts to context changes at runtime by utilizing reconfiguration triggers.

Paper Nr: 473
Title:

MONITORING SERVICE COMPOSITIONS IN MoDe4SLA - Design of Validation

Authors:

Lianne Bodenstaff, Andreas Wombacher, Roel Wieringa, Michael C. Jaeger and Manfred Reichert

Abstract: In previous research we introduced the MoDe4SLA approach for monitoring service compositions. MoDe4SLA identifies complex dependencies between Service Level Agreements (SLAs) in a service composition. By explicating these dependencies, causes of SLA violations of a service might be explained by malfunctioning of the services it depends on. MoDe4SLA assists managers in identifying such causes. In this paper we discuss how to evaluate our approach concerning usefulness for the user as well as effectiveness for the business. Usefulness is evaluated by experts who are asked to manage simulated runs of service compositions using MoDe4SLA. Their opinion on the approach is an indicator for its usefulness. Effectiveness is evaluated by comparing runtime results of SLA management using MoDe4SLA with runtime results of unsupported management. Criteria for effectiveness are cost reduction and increase in customer satisfaction.

Paper Nr: 500
Title:

PERSONALIZED MEDICAL WORKFLOW THROUGH SEMANTIC BUSINESS PROCESS MANAGEMENT

Authors:

Jiangbo Dang, Amir Hedayati, Ken Hampel and Candemir Toklu

Abstract: Business Process Management (BPM) systems are becoming the runtime governance of emerging Service Oriented Architecture (SOA) applications. They provide tools and methodologies to design and compose Web services that can be executed as business processes and monitored by BPM consoles. Ontology, as a formal declarative knowledge representation model, provides semantics upon which machine understandable knowledge can be obtained, and as a result, it makes machine intelligence possible. By combining ontology and BPM, Semantic Business Process Management (SBPM) provides a novel approach to align business processes from both business perspective and IT perspective. Current healthcare systems can adopt SBPM to make themselves adaptive, intelligent, and then serve patients better. Our ontology makes our vision of personalized healthcare possible by capturing all necessary knowledge for a complex personalized healthcare scenario including patient care, insurance policies, drug prescriptions, and compliances. This paper presents a hospital workflow management system that allows users, from physicians to administrative assistants, to create context-aware medical workflows, and execute them on-the-fly using an ontological knowledge base.

Paper Nr: 522
Title:

IMPLEMENTATION ISSUES OF THE INFONORMA MULTI-AGENT RECOMMENDER SYSTEM

Authors:

Lucas Drumond, Rosario Girardi, D’Jefferson Maranhão and Geraldo Abrantes

Abstract: Recommender systems can help professionals of the legal area to deal with the growth and dynamism of legal information sources. Infonorma is a multi-agent recommender system that recommends legal normative instruments to users according to their particular interests using both content-based and collaborative information filtering techniques. It has been modeled under the guidelines of the MAAEM methodology. This paper discusses the main implementation issues of the Infonorma system.

Paper Nr: 538
Title:

SEMANTIC INDEXING OF WEB PAGES VIA PROBABILISTIC METHODS - In Search of Semantics Project

Authors:

Fabio Clarizia, Francesco Colace, Massimo De Santo and Paolo Napoletano

Abstract: In this paper we address the problem of modeling large collections of data, namely web pages, by jointly exploiting traditional information retrieval techniques and probabilistic ones in order to find semantic descriptions of the collections. This novel technique is embedded in a real Web search engine in order to provide semantic functionalities, such as the prediction of words related to a single-term query. Experiments on several small domains (web repositories) are presented and discussed.
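
The single-term word-prediction functionality described in the abstract can be illustrated with a toy co-occurrence estimate of P(term | query term). This is only a sketch with made-up documents; the paper's actual probabilistic model is more elaborate.

```python
from collections import Counter, defaultdict
from itertools import combinations

# Toy "web page" collection; each document is a bag of terms.
docs = [
    ["semantic", "web", "ontology"],
    ["web", "search", "engine"],
    ["semantic", "search", "query"],
    ["web", "page", "search"],
]

# Count per-document term occurrences and within-document co-occurrences.
term_count = Counter()
cooc = defaultdict(Counter)
for doc in docs:
    terms = set(doc)
    term_count.update(terms)
    for a, b in combinations(terms, 2):
        cooc[a][b] += 1
        cooc[b][a] += 1

def predict(query_term, k=3):
    """Rank candidate terms by the estimated P(term | query_term)."""
    n = term_count[query_term]
    scored = {t: c / n for t, c in cooc[query_term].items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]
```

For example, `predict("web")` ranks "search" first here, because "search" co-occurs with "web" in two of the three documents containing "web".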

Paper Nr: 612
Title:

A CASE STUDY OF AUTOMATED INVENTORY MANAGEMENT

Authors:

Abrar Haider

Abstract: Maintaining a knowledge base of the location and condition of IT assets in a large organisation is a problem. Knowledge of the exact number of these assets is important for a number of reasons, which include controlling or eliminating the procurement of multiple assets for the same job or task; cost savings on maintenance contracts in accordance with the exact number of assets to be maintained; reduction in man hours spent locating these assets; and checking theft. This paper presents a case study of a large Australian utility that is grappling with these problems. In addition to these issues, the company is also looking for improved security of fixed/removable/mobile IT assets used by staff, and for integration of IT asset movement information with the staff access card and associated systems currently in use. This paper, therefore, presents a set of options available to the company to track the movement of their assets, and to use the same technical architecture to integrate asset information with information about the staff moving the assets.

Posters
Paper Nr: 69
Title:

KNOWLEDGE MANAGEMENT AND ECO-DESIGN SCOPES

Authors:

Rinaldo C. Michelini and Roberto P. Razzoli

Abstract: The eco-protection acts imply reorganising the manufacturing business towards product-service supply chains. The innovation can be tackled at two levels: the presetting of the knowledge management surroundings, to deal with extended producer responsibility; and the incorporation of the entrepreneurial facility/function assembly, to accomplish the product-service delivery. The paper surveys the knowledge management frame, specifying the standard PLM aids, taking account of the PLM-SE and PLM-RL requirements and paying special attention to the alternative net-concern options, from virtual to extended enterprise infrastructures.

Paper Nr: 113
Title:

EVALUATING THE ROLE OF INDIVIDUAL PERCEPTION IN IT OUTSOURCING DIFFUSION - An Agent based Model

Authors:

Marco Remondino, Marco Pironti and Roberto Schiesari

Abstract: The decision to adopt innovations has been investigated using both international patterns and behavioural theories. In this work, an agent-based (AB) model is created to study the spread of innovation in enterprises (namely, the adoption of Information Technology outsourcing). The paradigm of AB simulation makes it possible to capture human factors along with technical ones, and thus to study the influence of perception and the resulting bias. This work focuses on small and medium enterprises (SMEs), in which a restricted managing pool (sometimes just one person) decides whether or not to adopt a new technology, and bases the decision mainly on perception.

Paper Nr: 354
Title:

WEB-BASED COLLABORATIVE ENGINEERING FOR INJECTION MOLD DEVELOPMENT

Authors:

Dongyoon Lee, Kihyeong Song, Seokwoo Lee, Honzong Choi and Kwangyeol Ryu

Abstract: Injection molding is one of the most important manufacturing processes enabling modern mass production. Recent evolutions of the technology related to injection molding have made mold development more difficult. As a result, a higher level of engineering technology should be considered at the development stage. This paper presents web-based engineering collaboration among mold makers, experts, and product makers. Pre-examination and post-verification in the moldmaking process were investigated carefully and implemented in the web environment. Hundreds of engineering collaborations were conducted via the developed systems. Survey results show that these collaborations help small and medium sized moldmaking enterprises reduce cost and delivery time while increasing the quality of molds.

Paper Nr: 417
Title:

REAL-TIME RFID-ENABLED HEALTHCARE-ASSOCIATED MONITORING SYSTEM

Authors:

Belal Chowdhury, Xiaozhe Wang and Nasreen Sultana

Abstract: In the healthcare context, Radio Frequency Identification (RFID) technology has been employed to reduce health care costs and to facilitate the automatic streamlining of healthcare-associated infectious disease outbreak detection. RFID is playing an important role in monitoring processes in health facilities such as hospitals, nursing homes, special accommodation facilities and rehabilitation hospitals. In this paper, we present the design of a healthcare system using a real-time RFID-enabled application, called the “Healthcare-associated Infectious Outbreak Detection and Monitoring System (HIODMS)”.

Paper Nr: 431
Title:

DYNAMIC-AGENTS TO SUPPORT ADAPTABILITY IN P2P WORKFLOW MANAGEMENT SYSTEMS

Authors:

A. Aldeeb, K. Crockett and M. J. Stanton

Abstract: Peer-to-Peer (P2P) technology is being recognized as a new approach to decentralized workflow management systems, overcoming the limitations of current centralized Client/Server workflow management systems. However, the lack of support for adaptability and exception handling at the instance level seems to be responsible for the weakness of this approach in P2P workflow management systems. Dynamic agents can be used within the P2P workflow management system architecture to facilitate adaptability and exception handling. This paper presents a novel dynamic-agent P2P workflow management system which integrates three major technologies: software agents, P2P networking and workflow systems. The adoption of dynamic agents within a P2P network can help overcome the adaptability problem, reducing the need for human involvement in exception handling and improving the effectiveness of the P2P workflow management system.

Paper Nr: 487
Title:

PROFILING COMPUTER ENERGY CONSUMPTION ON ORGANIZATIONS

Authors:

Rui Pedro Lopes, Luís Pires, Tiago Pedrosa and Vasile Marian

Abstract: Modern organizations depend on computers to work. Text processing, CAD, CAM, simulation, statistical analysis and so on are fundamental for maintaining a high degree of productivity and competitiveness. Computers in an organization consume a considerable percentage of the overall energy and, although a typical computer provides power saving technologies, such as suspending or hibernating components, this feature can be disabled. Moreover, the user can opt never to turn off the workstation. Well defined power saving policies, with appropriate automatic mechanisms to apply them, can provide significant power savings with a consequent reduction of the power expense. With several computers in an organization, it is necessary to build a profile of the energy consumption. We propose installing a software probe in each computer to instrument the power consumption, either directly, by using a power meter, or indirectly, by measuring the processor performance counters. This distributed architecture, with software probes in every computer and a centralized server for persistence and decision making, aims to save energy by defining and applying organization-level power saving policies.
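
The indirect measurement path (inferring power draw from utilization counters rather than a power meter) is commonly approximated with a linear model. A minimal sketch follows; the wattage constants are hypothetical and would be calibrated against a real power meter, as in the direct path the paper describes.

```python
def estimate_power_watts(cpu_utilization, idle_watts=40.0, max_watts=90.0):
    """Linear power model: idle draw plus a utilization-proportional component.
    cpu_utilization is in [0.0, 1.0]; idle_watts and max_watts are hypothetical
    calibration constants, not values from the paper."""
    if not 0.0 <= cpu_utilization <= 1.0:
        raise ValueError("utilization must be in [0, 1]")
    return idle_watts + (max_watts - idle_watts) * cpu_utilization

def profile(samples):
    """Aggregate an organization-level estimate from per-machine probe samples.
    samples: list of (machine_id, utilization) pairs -> total estimated watts."""
    return sum(estimate_power_watts(u) for _, u in samples)
```

A centralized server could compare such profiles before and after applying a power saving policy to quantify the savings.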

Paper Nr: 550
Title:

A SIMULATOR FOR TEACHING AUTOMATA AND FORMAL LANGUAGES - FLyA

Authors:

J. Raymundo Marcial-Romero, Pedro A. Alvarez Contreras, Héctor A. Montes Venegas and J. Antonio Hernández Servín

Abstract: Finite automata theory is taught in almost every computing program. Its importance comes from its broad range of applications in many areas. As with any mathematically based subject, automata theory content is full of abstractions which constructively explain theoretical procedures. In computing engineering programs, teaching is mainly focused on procedures to solve a variety of engineering problems. However, following these procedures using a conventional approach can be a tedious task for the student. In this paper, a software tool to support the teaching of automata theory is presented. The use of an educational methodology to design the tool contributed remarkably to the acceptance of the software amongst students and teachers, as compared with existing tools for the same purpose.

Paper Nr: 634
Title:

INFORMATION SPACES AS A BASIS FOR PERSONALISING THE SEMANTIC WEB

Authors:

Ian Oliver

Abstract: The future of the Semantic Web lies not in the ubiquity, addressability and global sharing of information, but rather in localised information spaces and their interactions. These information spaces will be made at a much more personal level and will not necessarily adhere to globally agreed semantics and structures, relying instead upon ad hoc and evolving semantic structures.

Area 5 - Human-Computer Interaction

Full Papers
Paper Nr: 84
Title:

An Agile Process Model for Inclusive Software Development

Authors:

Rodrigo Bonacin, Maria Cecília Calani Baranauskas and Marcos Antônio Rodrigues

Abstract: The Internet represents a new dimension for software development. It can be understood as an opportunity to develop systems that promote social inclusion and citizenship. Such systems impose a singular way of developing software, in which accessibility and usability are key requirements. This paper proposes a process model for agile software development which takes these requirements into account. The method brings together multidisciplinary practices from Participatory Design and Organizational Semiotics with concepts of agile models. The paper presents the instantiation of the process model during the development of a social network system which aims to promote social and digital inclusion. The results and the adjustments of the proposed development process model are also discussed.

Paper Nr: 106
Title:

Creation and Maintenance of Query Expansion Rules

Authors:

Stefania Castellani, Aaron Kaplan, Frédéric Roulland, Jutta Willamowski and Antonietta Grasso

Abstract: In an information retrieval system, a thesaurus can be used for query expansion, i.e. adding words to queries in order to improve recall. We propose a semi-automatic and interactive approach for the creation and maintenance of domain-specific thesauri for query expansion. Domain-specific thesauri are especially required in highly technical domains where the use of general thesauri for query expansion introduces more noise than useful results. Our semi-automatic approach to thesaurus creation constitutes a good compromise between fully manual approaches, which produce high-quality thesauri but at a prohibitively high cost, and fully automatic approaches, which are cheap but produce thesauri of limited quality. This article describes our approach and the architecture of the system implementing it, named Cannelle. It exploits user query logs and natural language processing to identify valuable synonymy candidates, and allows editors to interactively explore and validate these candidates in the context of a domain-specific searchable knowledge base. We evaluated the system in the domain of online troubleshooting, where the proposed method yielded an improvement in the quality of the search results obtained.
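
The thesaurus-driven expansion step described above can be sketched as simple OR-expansion over validated synonyms. This is a minimal illustration with a made-up troubleshooting thesaurus, not the Cannelle implementation; the paper's contribution is in how such entries are mined from query logs and validated by editors.

```python
# Hypothetical domain-specific thesaurus: term -> editor-validated synonyms.
THESAURUS = {
    "paper jam": ["media jam", "feed error"],
    "cartridge": ["toner", "ink tank"],
}

def expand_query(terms):
    """Return the original query terms plus any thesaurus synonyms,
    so the search engine can match documents using either wording."""
    expanded = list(terms)
    for t in terms:
        expanded.extend(THESAURUS.get(t, []))
    return expanded
```

For instance, a query for "cartridge" would also retrieve documents that only mention "toner", improving recall in the troubleshooting domain.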

Paper Nr: 122
Title:

Stories and Scenarios Working with Culture-art and Design in a Cross-cultural Context

Authors:

Elizabeth Furtado, Albert Schilling and Liadina Camargo

Abstract: This paper discusses the use of user experience prototyping and theatrical techniques in two experiments to attain the following objectives of interaction design: to explore new ideas and to communicate cross-cultural users’ needs and their expectations for iDTV (interactive Digital TeleVision) services. These two objectives are particularly important when the systems involved are unknown to people. In the first experiment, we showed the implication of real stories for the construction of efficient interaction scenarios in a process of interaction design creation. In the second experiment, we showed the implication of stories told through theatre in order to achieve an objective communication of the purposes of iDTV services in a process of art and culture. The results are described by discussing the strengths and weaknesses of this approach.

Paper Nr: 220
Title:

End-user Development for Individualized Information Management: Analysis of Problem Domains and Solution Approaches

Authors:

Michael Spahn and Volker Wulf

Abstract: Delivering the right information at the right time to the right persons is one of the most important requirements of today’s business world. Nevertheless, enterprise systems do not always provide the information in a way suitable for the individual information needs and working practices of business users. Due to the complexity of enterprise systems, business users are not able to adapt these systems to their needs by themselves. The adoption of End-User Development (EUD) approaches, supporting end-users to create individual software artifacts for information access and retrieval, could enable better utilization of existing information and better support of the long tail of end-users’ needs. In this paper, we assess possibilities for improving information management through EUD, by analyzing relevant problem domains and solution approaches considering fundamental aspects of technology acceptance theories. The analysis is based on a questionnaire survey, conducted in three midsized German companies. We investigate the domains of information access and the flexible post-processing of enterprise data. Therefore we assess the importance of the respective domain for the work of end-users, perceived pain points, the willingness to engage in related EUD activities and the perceived usefulness of concrete EUD approaches we developed to address the respective domains.

Paper Nr: 254
Title:

Evaluating the Accessibility of Websites to Define Indicators in Service Level Agreements

Authors:

Sinésio Teles de Lima, Fernanda Lima and Káthia Marçal de Oliveira

Abstract: Despite the evolution of the Internet in the past years, people with disabilities still encounter obstacles to accessibility that impede adequate understanding of website content. Considering that Web accessibility is an added value to the website, it is important to have in place monitoring mechanisms and website accessibility controls. Service level agreements (SLA) can be used for this purpose, as they establish, by means of a service catalog, measurable indicators that certify the fulfillment of preset goals. This paper proposes a way to evaluate the accessibility of websites through a practical approach utilizing software measures, with the purpose of collecting data to define indicators for a service catalog of an SLA of website accessibility. Initial application of the approach was conducted on Brazilian federal government websites with the participation of ten users with visual disabilities. The study shows the viability of defining indicators.
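
An SLA indicator of the kind described above could be computed from per-page accessibility measures. The following sketch is purely illustrative: the indicator definition (fraction of pages with no detected violations) and the target value are assumptions, not the paper's catalog.

```python
def accessibility_indicator(pages):
    """pages: list of detected-violation counts, one per evaluated page.
    Returns the fraction of pages with zero detected violations."""
    if not pages:
        raise ValueError("no pages evaluated")
    return sum(1 for v in pages if v == 0) / len(pages)

def sla_met(pages, target=0.95):
    """Check the indicator against a hypothetical SLA target level."""
    return accessibility_indicator(pages) >= target
```

In an SLA service catalog, such an indicator would be measured periodically and reported against the agreed target.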

Paper Nr: 270
Title:

Promoting Collaboration through a Culturally Contextualized Narrative Game

Authors:

Marcos Alexandre Rose Silva and Junia Coutinho Anacleto

Abstract: This paper describes research on the development of a web narrative game to be used at school by teachers, considering students’ culture as expressed in common sense knowledge. Through storytelling, the game allows a teacher to create stories according to the students’ cultural reality, and consequently enables students to identify with the stories and get interested in collaborating with the teacher and other students to develop them as co-authors. The game can thus help students learn how to express themselves, that is, to let their imagination flow and to adequately understand and elaborate situations experienced in school, family and community, with no impact on their real life. Through stories, students can also learn how to work collaboratively, to help, and to be helped by their friends and teacher. This context is also applicable in companies, considering teamwork and the role each person has to play in collaborative work.

Paper Nr: 275
Title:

Applying the Discourse Theory to the Moderator’s Interferences in Web Debates

Authors:

Cristiano Maciel, Vinícius Carvalho Pereira, Licínio Roque and Ana Cristina Bicharra Garcia

Abstract: This paper presents a methodology for supporting the moderation phase in DCC (Democratic Citizenship Community), a virtual community for supporting e-democratic processes in e-life systems and applications. Based on the Government-Citizen Interactive Model, the DCC encompasses an innovative debate structure, as well as the moderator’s participation based on Discourse Theory, especially concerning argumentative mistakes. Regarding the moderator’s role, efforts have been made to improve the formalization of arguments and opinions while maintaining the usability of the platform. This research focuses on the moderator’s participation via a case study, and the experiment is analyzed in a Web debate.

Paper Nr: 333
Title:

EXPERTKANSEIWEB: A Tool to Design Kansei Website

Authors:

Anitawati Mohd Lokman, Nor Laila Md Noor and Mitsuo Nagamachi

Abstract: In this paper we describe our research work on the development of a design tool for building Kansei websites. The design tool to facilitate Kansei web design, named ExpertKanseiWeb, was developed based on results obtained from applying the Kansei Engineering method to extract website visitors’ Kansei responses. From the Partial Least Squares (PLS) analysis performed, a guideline composed of the website design elements and the implied Kansei was established. This guideline becomes the basis for the system structure of the design tool. The ExpertKanseiWeb system consists of a Client Interface (CI), a system controller and a Kansei Web Database System (KWDS). Clients can benefit from the tool as it offers easy interpretation of the guideline and presents examples for the design of Kansei websites.

Paper Nr: 388
Title:

Evaluation of Information Systems Supporting Asset Lifecycle Management

Authors:

Abrar Haider

Abstract: Performance evaluation is a subjective activity that cannot be detached from the human understanding, social context, and cultural environment within which it takes place. Apart from these, information systems evaluation faces certain conceptual and operational challenges that further complicate the process. This paper deals with the issue of performance evaluation of information systems utilised across the engineering asset lifecycle. The paper highlights that these information systems not only have to enable asset management strategy, but are also required to inform it, for better lifecycle management of the critical asset equipment utilised in production or service environments. Evaluation of these systems thus calls for ascertaining both hard and soft benefits to the organisation and their contribution to organizational development. This, however, requires that the evaluation exercise identify alternatives and choices, and in doing so become a strategic advisory mechanism that supports information systems planning, development, and management processes. This paper proposes a comprehensive methodology for the evaluation of information systems utilised in managing engineering assets. This methodology is learning centric and provides feedback that facilitates actionable organizational learning, thus allowing the organisation to engage in continuous improvement based on generative learning.

Paper Nr: 408
Title:

Fast Unsupervised Classification for Handwritten Stroke Analysis

Authors:

Won-Du Chang and Jungpil Shin

Abstract: This paper considers the unsupervised classification of handwritten character strokes with regard to speed, since handwritten strokes, with their high and variable dimensions, prove challenging for classification problems. Our approach employs a robust feature detection method for fast classification. The dimension is reduced by selecting feature points among all the points within strokes, which eliminates the need to compare stroke signals of two different dimensions. Although some misclassification problems remain, we can safely classify strokes according to handwriting styles through a refinement procedure. This paper illustrates that the equalization problem, i.e. a severe difference in small parts between two strokes, can be ignored by summing all of the differences via our method.
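
The dimension-reduction idea (comparing strokes through a fixed number of selected points rather than raw variable-length signals) can be sketched generically. This is an assumption-laden illustration using evenly spaced points; the paper's feature detection method is more robust than this.

```python
def select_feature_points(stroke, k=8):
    """Reduce a stroke (list of (x, y) points) to k evenly spaced points so
    that strokes of different lengths become directly comparable.
    (A generic sketch, not the paper's feature detection method.)"""
    n = len(stroke)
    if n < k:
        raise ValueError("stroke has fewer points than k")
    return [stroke[round(i * (n - 1) / (k - 1))] for i in range(k)]

def stroke_distance(a, b, k=8):
    """Sum of pointwise differences between the two reduced strokes."""
    fa, fb = select_feature_points(a, k), select_feature_points(b, k)
    return sum(abs(x1 - x2) + abs(y1 - y2)
               for (x1, y1), (x2, y2) in zip(fa, fb))
```

Summing the pointwise differences mirrors the abstract's observation that small local differences can be absorbed into one aggregate distance.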

Paper Nr: 495
Title:

Interfaces for All: A Tailoring-based Approach

Authors:

Vânia Paula de Almeida Neris and M. Cecília C. Baranauskas

Abstract: Following the precepts of Universal Design, we must develop systems that allow access to software applications without discrimination and making sense for the largest possible audience. One way to develop Interfaces for All is to offer to users the possibility of tailoring the interface according to their preferences, needs and situations of use. Tailorable solutions already present in some interactive systems do not consider the diversity of users, as they do not include illiterates and non-expert users, for example. The development of systems to be used for all requires a socio-technical vision for the problem. In this paper we present and discuss the first results of a work based on the reference of Organizational Semiotics and Participatory Design to elicit users’ and system’s requirements, and design a software solution with the direct participation of those involved, under the design for all principles.

Paper Nr: 498
Title:

Integrating Google Earth within OLAP Tools for Multidimensional Exploration and Analysis of Spatial Data

Authors:

Sergio Di Martino, Sandro Bimonte, Michela Bertolotto and Filomena Ferrucci

Abstract: Spatial OnLine Analytical Processing (SOLAP) solutions are a type of Business Intelligence tool meant to support a decision maker in extracting hidden knowledge from data warehouses containing spatial data. To date, very few SOLAP tools are available, each presenting drawbacks that reduce their flexibility. To overcome these limitations, we have developed a web-based SOLAP tool, obtained by integrating, into an ad-hoc architecture, the geobrowser Google Earth with a freely available OLAP engine, namely Mondrian. As a consequence, a decision maker can perform exploration and analysis of spatial data through both the geobrowser and a pivot table in a seamless fashion. In this paper, we illustrate the main features of the system we have developed, together with the underlying architecture, using a simulated case study.

Paper Nr: 564
Title:

An Automated Meeting Assistant: A Tangible Mixed Reality Interface for the AMIDA Automatic Content Linking Device

Authors:

Jochen Ehnes

Abstract: We describe our approach to supporting ongoing meetings with an automated meeting assistant. The system, based on the AMIDA Content Linking Device, aims at providing documents used in previous meetings that are relevant to the ongoing meeting, based on automatic speech recognition. Once the content linking device finds documents linked to a discussion of a similar subject in a previous meeting, it assumes they may be relevant to the current discussion as well. We believe that the way these documents are offered to the meeting participants is as important as the way they are found. We developed a mixed reality, projection-based user interface that lets the documents appear on the table tops in front of the meeting participants. They can hand them over to others or bring them onto the shared projection screen easily if they consider them relevant; yet irrelevant documents don’t draw too much attention away from the discussion. In this paper we describe the concept and implementation of this user interface and provide some preliminary results.

Paper Nr: 579
Title:

Investigation of Error in 2D Vibrotactile Position Cues with respect to Visual and Haptic Display Properties: A Radial Expansion Model for Improved Cuing

Authors:

Nicholas G. Lipari, Christoph W. Borst and Vijay B. Baiyya

Abstract: We present a human factors experiment aimed at investigating certain systematic errors in locating position cues on a rectangular array of vibrating motors. Such a task is representative of haptic signals providing supplementary information in a collaborative or guided exploration of a dataset. In this context, both the visual size and the presence of correct-answer reinforcement may be subject to change. Consequently, we considered the effects of these variables on position identification. We also investigated five types of stimulus points based on the stimulus' position relative to adjacent motors. As visual size increases, it initially has the dominant effect on error magnitude; at larger sizes, correct-answer feedback also plays a role. Radial error, roughly the radial difference between the stimulus and response positions, modeled the systematic error. We applied a quadratic fit and estimated a calibration procedure within a 2-fold cross validation.

Paper Nr: 611
Title:

Developing a Model to Measure User Satisfaction and Success of Virtual Meeting Tools in an Organization

Authors:

A. K. M. Najmul Islam

Abstract: Information systems evaluation is an important issue for managers in an organization, but it is difficult to carry out. Much work has been done in this particular area, and many methods have been developed over the years to evaluate information systems. The easiest and most widely used evaluation method is to measure user satisfaction with a system, but there is no single model that can be used to evaluate all kinds of information systems. In this paper, we propose a model to measure user satisfaction with virtual meeting tools used in an organization. We verify the model by conducting two surveys and applying different statistical analyses to the collected survey data. The proposed model measures user satisfaction and success based on six factors, namely content, accuracy, ease of use, timeliness, system reliability and system speed.
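
A composite score over the six named factors could, for instance, be computed as a simple average of per-factor ratings. This is an illustrative sketch only: the paper derives its model statistically from survey data, and the equal weighting and 1-7 rating scale here are assumptions.

```python
# The six factors named in the proposed model.
FACTORS = ("content", "accuracy", "ease_of_use", "timeliness",
           "system_reliability", "system_speed")

def satisfaction_score(responses):
    """Average the six factor ratings into one composite satisfaction value.
    responses: dict mapping each factor name to a rating (assumed 1-7)."""
    missing = [f for f in FACTORS if f not in responses]
    if missing:
        raise ValueError(f"missing factors: {missing}")
    return sum(responses[f] for f in FACTORS) / len(FACTORS)
```

Averaging such scores across respondents would give one coarse per-tool indicator for managers to compare.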

Short Papers
Paper Nr: 10
Title:

END-USER DEVELOPMENT IN A GRAPHICAL USER INTERFACE SETTING

Authors:

Martin Auer, Johannes Pölz and Stefan Biffl

Abstract: In many areas, software applications must be highly configurable - using a pre-defined set of options or preferences is not flexible enough. One way to improve an application’s flexibility is to allow users to change parts of the source code - and thus the application’s behavior - on-the-fly; modern languages like Java greatly facilitate this by providing reflection features. Such an approach, however, is often limited to user-defined mathematical formulas, e.g., in software like cash flow engines, reporting tools, etc. This paper applies the concept to a more generic area: the graphical representation of diagrams in a UML tool. Users can create new types of graphical elements by directly programming how the elements are drawn, all within the UML tool and at run time. The approach is flexible, and the user-defined extensions are consistent with the tool’s core source code.
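
The paper's mechanism relies on Java reflection; the analogous idea can be illustrated in Python, where user-supplied drawing code is compiled and bound at run time. All names here are hypothetical, and the "element" merely returns a description instead of drawing.

```python
# User-supplied source for a new graphical element, edited inside the tool.
USER_SOURCE = """
def draw(width, height):
    # A user-defined "graphical element": here it just returns a description.
    return f"ellipse {width}x{height}"
"""

def load_user_element(source):
    """Compile user source at run time and return the hook the host calls."""
    namespace = {}
    exec(compile(source, "<user element>", "exec"), namespace)
    return namespace["draw"]

draw = load_user_element(USER_SOURCE)
```

The host application only depends on the `draw` hook's signature, so users can redefine the element's rendering without restarting the tool.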

Paper Nr: 107
Title:

EVALUATION OF ANTHROPOMORPHIC USER INTERFACE FEEDBACK IN AN EMAIL CLIENT CONTEXT AND AFFORDANCES

Authors:

Pietro Murano, Amir Malik and Patrik O'Brian Holt

Abstract: This paper describes an experiment and its results concerning research that has been ongoing for a number of years in the area of anthropomorphic user interface feedback. The main aims of the research have been to examine the effectiveness of, and user satisfaction with, anthropomorphic feedback. The results are of use to all user interface designers. Currently, work in the area of anthropomorphic feedback offers no global conclusions concerning its effectiveness and user satisfaction capabilities. This research is investigating a way of reaching some global conclusions concerning this type of feedback. The experiment concerned the context of downloading, installing and configuring an email client, which is part of the domain of software for systems usage. Anthropomorphic feedback was compared against equivalent non-anthropomorphic feedback. The results indicated that the anthropomorphic feedback was more effective and preferred by users. A further aim was to examine the types of feedback in relation to affordances. The results obtained can be explained in terms of the Theory of Affordances.

Paper Nr: 109
Title:

STUDY FOR ESTABLISHING DESIGN GUIDELINES FOR MANUALS USING AUGMENTED REALITY TECHNOLOGY - Verification and Expansion of the Basic Model Describing “Effective Complexity”

Authors:

Miwa Nakanishi, Shun-ichro Tamamushi and Yusaku Okada

Abstract: Augmented reality (AR), a technology that enables users to see an overlay of digital information on the real view, is expected to be applied more and more to human factors innovation. It has been suggested that a manual using AR (an AR manual) improves accuracy and efficiency in actual work situations. To make AR manuals practical, hardware such as see-through displays or retinal scanning displays has been actively developed. However, the software side, i.e., the information provided by the AR manual, has not been sufficiently examined. In a recent study, the authors built a mathematical model that describes the “effective complexity” of an AR manual according to the complexity of the real view. In this study, the basic model is verified by applying it to an AR manual for a realistic task. Furthermore, the applicability of the basic model is examined by assuming two different situations where either accuracy or efficiency has high priority. The objective of this study is to establish rough but practical guidelines for designing AR manuals.

Paper Nr: 163
Title:

APPLYING COLORS BASED ON CULTURE KNOWLEDGE TO MOTIVATE COLLABORATION ON THE WEB

Authors:

Ana Luiza Dias, Junia C. Anacleto, Luciana M. Silveira, Rosângela Ap. D. Penteado and Laís A. S. Meuchi

Abstract: Collaborative and participatory work via the Web tends to increase due to professional teams’ need to accomplish tasks separated by distance and time, which demands more effort and stronger commitment from each person. In this context, cultural differences must be considered, as they interfere with the performance of each individual and either promote or hinder the communication intended for the group. This paper discusses a multidisciplinary analysis of colors and stimuli in computing environments using common sense knowledge, considering the cultural associations people make between colors and actions, emotions and objects, and showing how colors can motivate users to access and participate in collaborative tasks through stimuli using color symbolism built in the culture.

Paper Nr: 184
Title:

ENABLING CONTEXT-ADAPTIVE COLLABORATION FOR KNOWLEDGE-INTENSE PROCESSES

Authors:

Stephan Lukosch, Dirk Veiel and Jörg M. Haake

Abstract: Knowledge workers solve complex problems. Their work does not seem to be routinisable because of the unique results and constellation of actors involved. For distributed collaboration, knowledge workers need many different tools, which leads to knowledge dispersed over different locations, artifacts, and systems. Context-based adaptation can be used to support teams with shared workspace environments that best meet their needs. We propose an ontology representing context in a shared workspace environment, and a conceptual architecture for context sensing, reasoning, and adaptation. We report on first experiences demonstrating the applicability of our approach and give an outlook on directions for future work.

Paper Nr: 301
Title:

DYNAMIC MULTIMEDIA ENVIRONMENT BASED ON REALTIME USER EMOTION ASSESSMENT - Biometric User Data towards Affective Immersive Environments

Authors:

Vasco Vinhas, Daniel Castro Silva, Eugénio Oliveira and Luís Paulo Reis

Abstract: Both the academic and industry sectors have increased their attention to and investment in the fields of Affective Computing and immersive digital environments, the latter establishing itself as a reliable domain with increasingly cheap hardware solutions. With this in mind, the authors envisioned an immersive, dynamic digital environment coupled with automatic real-time user emotion assessment through biometric readings. The environment consisted of an aeronautical simulation whose internal variables, such as flight plan, weather conditions and maneuver smoothness, were dynamically altered by the assessed emotional state of the user, based on biometric readings including galvanic skin response, respiration rate and amplitude, and phalange temperature. The results were consistent with the emotional states reported by the users, with a success rate of 78%.

Paper Nr: 302
Title:

USERS' NEEDS FOR COLLABORATIVE MANAGEMENT IN EMERGENCY INFORMATION SYSTEMS

Authors:

Teresa Onorati, Alessio Malizia, Paloma Díaz and Ignacio Aedo

Abstract: The management of an emergency is a cooperative work that involves people from different areas and different roles. Collaborative tools are potentially useful for solving emergency situations, but their utility depends on the emergency workers' needs. In this paper, we describe an empirical study, based on surveys and interviews conducted with users, on how to improve the collaborative functionalities of an existing system used for cooperation and resource sharing among different Spanish Emergency Management governmental agencies. The goal of the study was to understand how emergency workers cooperate in real emergencies and the kind of tools they are actually using, as well as to identify potential strategies and technologies to improve the level of computer-supported collaboration.

Paper Nr: 321
Title:

A STUDY ON THE USE OF GESTURES FOR LARGE DISPLAYS

Authors:

António Neto and Carlos Duarte

Abstract: Large displays are becoming available to larger and larger audiences. In this paper we discuss the interaction challenges faced when attempting to transfer the classic WIMP design paradigm from the desktop to large wall-sized displays. We explore the field of gestural interaction on large screen displays, conducting a study where users are asked to create gestures for common actions in various applications suited for large displays. Results show how direct manipulation through gestural interaction appeals to users for some types of actions, while demonstrating that for other types gestures should not be the preferred interaction modality.

Paper Nr: 329
Title:

ON COLLABORATIVE SOFTWARE FOR WEB COMMUNITIES EVALUATION - A Case study

Authors:

Laura S. García, Dayane F. Machado, Juliano Duarte, Alexandre I. Direne, Marcos S. Sunye, Marcos A. Castilho, Luis C. E. de Bona and Fabiano Silva

Abstract: Collaborative software, and more specifically social software, must provide its users not only with a good application interface, but also - and more importantly - with easy and direct contact with other users. Within the field of collaborative software, we chose Orkut® as our object of evaluation, particularly in terms of the following communication tools: communities, messages and scrapbook. The research consisted, initially, of the evaluation of the abovementioned tools and, secondly, of the assessment of our method itself and its ability to appraise the inherent features of this kind of software. In the present paper we introduce and describe the method upon which we based our assessment. In addition, we justify the choice of this particular method and discuss the results obtained.

Paper Nr: 352
Title:

SIMULATION OF FOREST EVOLUTION - Effects of Environmental Factors to Trees Growth

Authors:

Jing Fan, Xiao-yong Sun, Ying Tang and Tian-yang Dong

Abstract: Owing to the complexity and variety of plant communities, simulating their structure and dynamics is a challenging task. In this paper we simulate and visualize the evolution of forests with a tree growth model influenced by environmental factors. The environmental factors considered include illumination, terrain and resource competition among trees. We develop our tree growth model based on the forest gap model by effectively incorporating the above environmental factors. The system is implemented with Visual C++ 6.0 and OpenGL. We compare the growth of trees (their heights and DBHs) that are of different ages or located in different regions. We also show changes in tree distribution within a certain landscape over a long period of time (more than two hundred years). The experimental results show that our simulation technique is effective.

Paper Nr: 370
Title:

INFORMATION SYSTEM CUSTOMIZATION - Toward Participatory Design and Development of the Interaction Process

Authors:

Daniela Fogli and Loredana Parasiliti Provenza

Abstract: This paper proposes the adoption of human-computer interaction methods to address some of the problems related to the customization of information systems, and particularly of enterprise resource planning systems. The paper specifically describes a multi-faceted approach to the participatory design and development of information systems to build the dialogue between the information system and its users. It encompasses i) a specification framework for representing and translating the different perspectives of the members of the design team, including the end users' perspective, ii) a methodology for the collaborative design of the interaction process, and iii) a set of guidelines to carry out the development activities.

Paper Nr: 380
Title:

DEFINING A WORKFLOW PROCESS FOR TEXTUAL AND GEOGRAPHIC INDEXING OF DOCUMENTS

Authors:

Nieves R. Brisaboa, Ana Cerdeira-Pena, Miguel R. Luaces and Diego Seco

Abstract: Many public organizations are working on the construction of spatial data infrastructures (SDIs) that will enable them to share their geographic information. However, these SDIs, and Geographic Information Systems (GIS) in general, manage not only geographic data but also many textual documents that must be stored and retrieved (such as urban planning permissions and administrative files). Textual index structures must be integrated with GIS in order to provide efficient access to these documents. Furthermore, many of these documents include geographic references within their texts. Therefore, queries with geographic scopes should be correctly answered by the index structure, and the special characteristics of these geographic references, due to their spatial nature, should be taken into account. We present in this paper a workflow process that allows the gradual and collaborative creation of a document repository. These documents can be efficiently retrieved using queries regarding both their texts and the geographic references included within them. Moreover, the index structure and the supported query types are briefly described.

Paper Nr: 385
Title:

MEASURING COORDINATION GAPS OF OPEN SOURCE GROUPS THROUGH SOCIAL NETWORKS

Authors:

Szabolcs Feczak and Liaquat Hossain

Abstract: In this paper, we argue that coordination gaps, such as communication issues and task dependencies, have a significant impact on the performance of work groups. To address these issues, contemporary science suggests optimising the links between the social aspects of society and the technical aspects of machines. A framework is proposed to describe social network structure and coordination performance variables with regard to distributed coordination during bug fixing in the open source domain. Based on the model and the literature reviewed, we put forward two propositions: (i) the level of interconnectedness has a negative relation with coordination performance; and (ii) centrality social network measures have a positive relation with coordination performance variables. We provide an empirical analysis using a large sample of 415 open source projects hosted on SourceForge.net. The results suggest that there is a relationship between interconnectedness and coordination performance, and the centrality measures were found to have positive relationships with the coordination performance variables.

Paper Nr: 413
Title:

INTELLIGENT AUTHORING TOOLS FOR ENHANCING MASS CUSTOMIZATION OF e-SERVICES - The smarTag Framework

Authors:

Panagiotis Germanakos, Nikos Tsianos, Zacharias Lekkas, Costas Mourlas, Mario Belk and George Samaras

Abstract: Mass customization should be more than just configuring a specific component (hardware or software); it should be seen as the co-design of an entire system, including services, experiences and human satisfaction at the individual as well as the community level. The main objective of this paper is to introduce a framework for the automatic reconstruction of Web content based on human factors. Human factors and users' characteristics play the most important role throughout the design and implementation of the framework, which has the inherent ability to interact with its environment and the user and to transparently adapt its behaviour using intelligent techniques, reaching high levels of usability, user satisfaction, effectiveness and quality of service presentation. The initial evaluation results show that the proposed framework does not degrade efficiency (in terms of speed and accuracy) during the Web content adaptation process.

Paper Nr: 446
Title:

ELECTRONIC RECORDS MANAGEMENT SYSTEMS - The Human Factor

Authors:

Johanna Gunnlaugsdottir

Abstract: The purpose of this paper is to present the findings of research conducted in Iceland during the period 2001-2005 and in 2008 on how employees view their use of Electronic Records Management Systems (ERMS). Qualitative methodology was used. Four organizations were studied in detail and another four provided a comparison. Open-ended interviews and participant observations were the basic elements of the study. The research uncovered the basic issues in the user-friendliness of ERMS, the substitutes that employees turned to if they did not welcome ERMS, and how they felt about their work being shared and observed by others. Employees seemed to regard ERMS as groupware for constructive group work and not as an obtrusive part of a surveillance society. The research indicated that training is the most important factor in making employees confident in their use of ERMS. It also identifies the most important implementation factors and the issues that must be dealt with to make employees more content, confident and proficient users of ERMS.

Paper Nr: 485
Title:

THE IMPACT OF INTERFACE ASPECTS ON INTERACTIVE MAP COMMUNICATION - An Evaluation Methodology

Authors:

Lucia Peixe Maziero, Cláudia Robbi Sluter and Laura Sanchèz Garcia

Abstract: In this paper, we present an analytical and methodological procedure to evaluate the interfaces of Interactive Maps. The main aims of such an evaluation are to (i) identify the essential aspects of these interfaces, (ii) investigate their influence on the communication with users and, based on this, (iii) set directives to guide the design of the interfaces of future Interactive Maps. The evaluation process leads to a detailed analysis of both the interface and the interaction itself. To do so, the process consists of the analysis of the essential elements of the interfaces, the evaluation of these aspects in relation to the users and, finally, the study of the results obtained. The results mainly provide significant information on those aspects of the interfaces which concern the resources necessary for both the interaction itself and the functionalities that Interactive Maps provide.

Paper Nr: 490
Title:

SCENARIO-BASED DESIGN - AN ESSENTIAL INSTRUMENT FOR AN INNOVATIVE TARGET APPLICATION - Case Report

Authors:

L. S. García, A. I. Direne, M. A. Castilho, L. E. Bona, F. Silva and M. S. Sunye

Abstract: Scenario-based design is a widely accepted method within the literature on Human-Computer Interaction and, for certain cases, also within the literature on Software Engineering. However, the lack of integration between these two areas, in addition to the still quite infrequent actual use of scenario-based design, stresses the need for increased emphasis on its relevance when applied to projects of truly innovative technological artefacts. In the present paper, we present a case report in which the above-mentioned method, applied to problem analysis, led to the design of a differential user-interface environment compared to the human process prior to the introduction of the computational application.

Paper Nr: 506
Title:

WEB FORM PAGE IN MOBILE DEVICES - Optimization of Layout with a Simple Genetic Algorithm

Authors:

Luigi Troiano, Cosimo Birtolo, Roberto Armenise and Gennaro Cirillo

Abstract: Filling out a form on mobile devices is generally harder than on other terminals, due to the reduced keyboard and display size, entailing higher fatigue and limiting the user experience. A solution to this problem can be based on reducing the input effort required of the user by auto-completion, and on re-organizing the fields so as to present first those with higher prediction power. In this paper we assume we are able to predict the user input, and we optimize the field layout aiming to reduce the average number of input actions.
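The layout optimization the abstract describes can be sketched as a permutation genetic algorithm. The sketch below is illustrative only: the field names, typing costs and prediction powers are invented, as is the simple fitness model in which high-prediction-power fields placed early enable auto-completion of later fields; the paper's actual encoding and fitness function may differ.

```python
import random

random.seed(1)  # reproducible run of this sketch

# Hypothetical form fields: (name, typing_cost, prediction_power).
# prediction_power in [0, 1] models how much a filled field helps predict later ones.
FIELDS = [
    ("country", 3, 0.9), ("city", 6, 0.6), ("zip", 5, 0.8),
    ("street", 12, 0.3), ("name", 8, 0.0), ("email", 10, 0.1),
]

def expected_actions(order):
    """Expected input actions for an ordering: each field's typing cost is
    discounted by the auto-completion enabled by earlier fields."""
    total, accumulated = 0.0, 0.0
    for _name, cost, power in order:
        total += cost * (1.0 - min(accumulated, 0.9))  # cap the completion benefit
        accumulated += power * (1.0 - accumulated)     # diminishing returns
    return total

def crossover(a, b):
    """Order crossover (OX): keep a slice of parent a, fill the rest from b."""
    i, j = sorted(random.sample(range(len(a)), 2))
    middle = a[i:j]
    rest = [f for f in b if f not in middle]
    return rest[:i] + middle + rest[i:]

def mutate(order, rate=0.2):
    order = order[:]
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

def optimise(fields, pop_size=30, generations=100):
    """Elitist GA over field permutations, minimising expected input actions."""
    pop = [random.sample(fields, len(fields)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=expected_actions)
        survivors = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=expected_actions)

best = optimise(FIELDS)
print([name for name, _, _ in best])
```

Under this toy model the GA pushes cheap, highly predictive fields (e.g. `country`) to the front, matching the abstract's intuition of presenting high-prediction-power fields first.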

Paper Nr: 559
Title:

INTEGRATING VR IN AN ENGINEERING COLLABORATIVE PROBLEM SOLVING ENVIRONMENT

Authors:

Ismael H. F. dos Santos, Alberto Raposo and Marcelo Gattass

Abstract: We present an environment for executing engineering simulations and visualizing results in a Virtual Environment. The work is motivated by the necessity of finding effective solutions for collaboration of team workers during the execution of complex Petroleum Engineering projects. By means of a Scientific Workflow Management System users are able to orchestrate the execution of different simulations as workflow tasks that can be arranged in many ways according to project requirements. Within a workflow, as its last step, the most interesting cases can be selected for visualization in a distributed collaborative session.

Paper Nr: 595
Title:

BACK CHANNEL IN INTERACTIVE DIGITAL TELEVISION SYSTEMS: STRATEGIES FOR PROTOTYPING APPLICATIONS USING AN INTERACTIVE SERVICE PROVIDER - Internet Computing - Interactive and Multimedia Web Applications

Authors:

João Benedito dos Santos Junior, João Carlos de Moraes Morselli Junior, Iran Calixto Abrão, Fabiano Costa Teixeira, Gabriel Massote Prado, Paulo Muniz de Ávila, Mateus dos Santos and Rinaldi Fonseca do Nascimento

Abstract: This work, developed at the Interactive Digital Television Lab of PUC Minas (Brazil), presents strategies for implementing a prototype platform for building an Interactive Service Provider (ISP), which can store, analyze and generate reports based on information derived from the interaction of viewers with interactive digital television applications, characterized by the use of a back channel over the Internet Protocol (IP). In the Brazilian scenario (the Brazilian Digital Television System), the development of an ISP platform builds on the experience accumulated with the JiTV (Java Interactive Television) platform, which covers production at the broadcaster, transmission over the communication network and reception on the viewer's access terminal, extended with the use of a back channel for interactivity actions.

Paper Nr: 628
Title:

HOW CAN A QUANTUM IMPROVEMENT IN PERSONAL AND GROUP INFORMATION MANAGEMENT BE REALIZED?

Authors:

Roger Tagg and Tamara Beames

Abstract: A number of authors have pointed out that the information technology used by individuals and groups for general information work support has not advanced very far in the last decade. At the same time, the level of information and cognitive overload on individuals has continued to rise. Research has been done in several areas (e.g. text mining and categorization, email threads, etc.), but a significant improvement in everyday tools has yet to be seen. This paper addresses the current problems, describes a range of issues that need to be addressed, and discusses how it might be possible in the future to advance to a new level of IT support.

Paper Nr: 639
Title:

PORTUGUESE WEB ACCESSIBILITY - Portuguese Enterprises Websites Accessibility Evaluation

Authors:

José Martins, José Cruz and Ramiro Gonçalves

Abstract: The use of the web is quickly spreading to the majority of society. In many countries the use of the web in government services, education and training, commerce, news, citizenship, health and entertainment is increasing significantly. The web is extremely important for the dissemination of information and the interaction between the various elements of society. Because of this, it is essential that the web be accessible to all, including those with any kind of disability. An accessible web may help handicapped citizens in their interaction with society. With this in mind, an evaluation of the accessibility levels of Portuguese websites is essential for assessing the availability of these websites to all disabled citizens.

Paper Nr: 644
Title:

AN INNOVATIVE MODEL OF TRANS-NATIONAL LEARNING ENVIRONMENT FOR EUROPEAN SENIOR CIVIL SERVANTS - Organizational Aspects and Governance

Authors:

Nunzio Casalino

Abstract: The purpose of this study is to investigate the benefits of introducing e-learning and a specific online environment into the training process of European civil servants. It describes the final results and the organisational impact of a first pilot training course combining 24 hours of e-learning and 27 hours (one week) of in-class courses. For each module, the e-learning preparation provided general training content to build the participants' background necessary for the in-class sessions. The project implemented a pilot to demonstrate the effectiveness of the overall system (applications, contents and organizational aspects) and to promote the use of e-learning in the EU public administration field. After one year, the project concluded its pilot phase and the results are analyzed. With a view to stimulating co-operation and the exchange of best practices in Europe, its purpose is to build and test an innovative model of trans-national networking, thanks to the active involvement of European schools and institutes of Public Administration.

Posters
Paper Nr: 74
Title:

AFFECTIVE ALGORITHM TO POLARIZE CUSTOMER OPINIONS

Authors:

Domenico Consoli, Claudia Diamantini and Domenico Potena

Abstract: Humans interact with other people and exchange reviews and ideas via the web. With the explosion of Web 2.0 platforms such as blogs, discussion forums, peer-to-peer networks, and various other types of social media, consumers share on the web their opinions regarding any product or service. Opinions give information about how a product or service, and reality in general, is perceived by other people. Emotional needs are associated with the psychological aspects of product ownership. When customers write reviews of a product or service, they transmit in the message the emotions they felt before and after purchasing the product. For the enterprise, understanding customers' emotional needs is vital for predicting and influencing their purchasing behaviour. In this paper, we polarize customer opinions with an original algorithm based on emotional indexes that are used to decipher, in an affective key, facial expressions and the emotional lexicon.

Paper Nr: 87
Title:

USER ACCEPTANCE OF SELF-SERVICE TECHNOLOGIES - An Integration of the Technology Acceptance Model and the Theory of Planned Behavior

Authors:

Chiao-Chen Chang, Wei-Lun Chang and Yang-Chieh Chin

Abstract: This study examines what may affect consumers' intention to use a self-service technology (SST). The objective of this study is to advance our understanding of the intention to use SSTs by comparing and integrating the theory of planned behaviour (TPB) and the technology acceptance model (TAM) as they relate to this issue. Data were collected from 280 adult consumers, and a structural equation modelling approach was employed to test the hypotheses. Although attitude, subjective norm and perceived usefulness have direct positive relationships with the behavioural intention to use an SST, perceived behavioural control plays the most important role in explaining the intention to use SSTs. We conclude with managerial implications and directions for future research.

Paper Nr: 199
Title:

KEEPING TRACK OF HOW USERS USE CLIENT DEVICES - An Asynchronous Client-Side Event Logger Model

Authors:

Vagner Figuerêdo de Santana and Maria Cecilia Calani Baranauskas

Abstract: Web Usage Mining usually considers server logs as the data source for collecting usage patterns. This solution has limitations when the goal is to represent how users interact with specific user interface elements, since server logs may not contain detailed information about users' actions. This paper presents a model for logging client-side events and an implementation of it as a website evaluation tool. By using the model presented here, miner systems can capture detailed Web usage data, making a fine-grained examination of Web page usage possible. In addition, the model can help Human-Computer Interaction practitioners log client-side events of mobile devices, set-top boxes and Web pages, among other artefacts.
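As a rough illustration of such a client-side logger model (not the authors' actual implementation; all class and field names here are hypothetical), events can be captured into a buffer and flushed asynchronously in batches, so that logging does not block the interaction being observed:

```python
import json
import time
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class UIEvent:
    """One client-side event, as the model might capture it."""
    event_type: str          # e.g. "click", "keypress", "scroll"
    target: str              # identifier of the UI element involved
    timestamp: float = field(default_factory=time.time)

class AsyncEventLogger:
    """Buffers events and flushes them in batches; the `flushed` list stands
    in for asynchronous HTTP POSTs to a log-collecting server."""
    def __init__(self, flush_size: int = 3):
        self.buffer: List[UIEvent] = []
        self.flushed: List[str] = []
        self.flush_size = flush_size

    def log(self, event: UIEvent) -> None:
        self.buffer.append(event)
        if len(self.buffer) >= self.flush_size:
            self.flush()

    def flush(self) -> None:
        if self.buffer:
            self.flushed.append(json.dumps([asdict(e) for e in self.buffer]))
            self.buffer = []

logger = AsyncEventLogger()
for item in ("click:menu", "keypress:search", "click:submit"):
    etype, target = item.split(":")
    logger.log(UIEvent(etype, target))
print(len(logger.flushed), "batch(es) flushed")
```

In a real deployment the capture side would be JavaScript event listeners in the browser; the batching idea is the same.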

Paper Nr: 218
Title:

INVESTIGATIONS INTO ENHANCED ALERT MANAGEMENT FOR COLLISION AVOIDANCE IN SHIP-BORNE INTEGRATED NAVIGATION SYSTEMS

Authors:

Michael Baldauf, Knud Benedict, Florian Motz and Sabine Höckel

Abstract: Highly sophisticated integrated navigation systems are installed on ships' navigational bridges to support the operators of modern container ships. The integrated systems should assist captains, navigation officers and pilots in avoiding dangerous situations when sailing from the port of departure to the port of destination. Numerous human-machine interfaces require interaction to control the voyage in every situation under all possible circumstances. However, according to shipping statistics, collisions and groundings remain major risks. This paper deals with investigations into the alert management on board modern ships and a potential approach to reducing the number of alarms. Results gained during several field studies on board ships are presented. Based on these results, a draft concept for reducing the high frequency of collision warnings, to be implemented in the navigation systems on board, is discussed, and first preliminary results are introduced.

Paper Nr: 349
Title:

A RECEIVER UNIT OF PHOTODETECTORS FOR A LASER POINTER AS A WIRELESS CONTROLLER

Authors:

Jaemyoung Lee

Abstract: We propose a wireless receiver unit of photodetectors for a commercially available laser pointer. A controller in the receiver unit drives a multimedia player in accordance with the scanning direction of the laser pointer over photodetectors. A control algorithm is proposed for control of the multimedia player. We believe that the proposed receiver unit and the control algorithm for a laser pointer can be applied to other systems.

Paper Nr: 366
Title:

CROSSMODAL PERCEPTION OF MISMATCHED EMOTIONAL EXPRESSIONS BY EMBODIED AGENTS

Authors:

Yu Suk Cho, Ji He Suk and Kwang Hee Han

Abstract: Today, embodied agents generate a large amount of interest because of their vital role in human-human and human-computer interactions in virtual worlds. A number of researchers have found that we can recognize and distinguish between emotions expressed by an embodied agent. In addition, many studies have found that we respond to simulated emotions in a way similar to human emotion. This study investigates the interpretation of mismatched emotions expressed by an embodied agent (e.g. a happy face with a sad voice). The study employed a 4 (visual: happy, sad, warm, cold) × 4 (audio: happy, sad, warm, cold) within-subjects repeated-measures design. The results suggest that people perceive emotions depending not on just one channel but on both channels. Additionally, facial expression (happy face vs. sad face) makes a difference in the influence of the two channels: the audio channel has more influence on the interpretation of emotions when the facial expression is happy. People were also able to feel emotions that were not expressed by the face or voice in the mismatched expressions, so it may be possible to express various and delicate emotions with an embodied agent using only a few kinds of emotions.

Paper Nr: 447
Title:

COMMON SENSE KNOWLEDGE BASE EXPANDED BY AN ONLINE EDUCATIONAL ENVIRONMENT

Authors:

Junia Coutinho Anacleto, Alexandre Mello Ferreira, Eliane Nascimento Pereira, Izaura Maria Carelli, Marcos Alexandre Rose Silva and Ana Luiza Dias

Abstract: The use of computer games in education has been growing as a potential tool to facilitate the teaching-learning process. In the “What is it?” environment presented in this article, the teacher can be co-author of a card-based guessing game in which common sense knowledge supports the teacher in being aware of students' culture and necessities. The environment also proposes a way to collect common sense statements: engines in the editor's module and the player's module store all user interaction and combine this information to make new relations in the Brazilian Open Mind Common Sense project (OMCS-Br) knowledge base. A case study was carried out with teachers and students from two different public schools, whose results point out the potential of this new way to collect common sense statements naturally through a web game.

Paper Nr: 456
Title:

AN ANALYSIS OF THE DIFFUSION OF INFORMATION TECHNOLOGY IN EDUCATION - Towards Europe’s Information Society

Authors:

Laura Asandului, Ciprian Ceobanu and Alina Ionescu

Abstract: The accelerated development of information and communication technologies has led educational institutions and companies to implement alternatives to traditional teaching methods. The new literacy determines the e-learning competencies. The paper presents an analysis of the expenditure on information technologies, the use of computers and the Internet, computer and Internet skills, and e-learning in the EU countries. The results show that there are disparities among EU member states regarding the extent of e-learning and the perspectives for its development.

Paper Nr: 626
Title:

WHAT DRIVES PEOPLE TO PLAY WII GAME? - The Trend of Human-Computer Interaction on Video Game Design

Authors:

Chih-Hung Wu, Ching-Cha Hsieh, Cheng-Chieh Huang and Chin-Chia Hsu

Abstract: Nintendo released its fifth home videogame console, the Wii, in November 2006, and sold over six million units in six months. What drives people to play the Wii? What is the magic? The TAM has been empirically tested to explain why people accept certain technologies, but most of its constructs are measured in workplace contexts. This research argues that to understand why people are attracted to a leisure technology such as the Wii, we should consider leisure social-psychology constructs. We consider that people using a leisure technology experience a subjective psychological state of leisure, not only objectively tangible activities. Thus, the intrinsic motivations of leisure will influence people's attitudes towards and intentions of using the Wii. This research proposes a new leisure technology acceptance model, using questionnaires, street surveys, and structural equation modelling (SEM) to collect and verify data. This study provides a new perspective and direction for studying leisure technology and human-computer interaction (HCI).

Paper Nr: 637
Title:

TEXT DRIVEN LIPS ANIMATION SYNCHRONIZING WITH TEXT-TO-SPEECH FOR AUTOMATIC READING SYSTEM WITH EMOTION

Authors:

Futoshi Sugimoto

Abstract: We developed a lip animation system that is simple and suited to the characteristics of Japanese. It adopts the phoneme context that is usually used in speech synthesis, and it only needs a small database containing tracing data of lip movement for each phoneme context. In an experiment in which subjects read a word from the lip animation, the results were as accurate as reading from real lips.

Special Session on Doctoral Research

Posters
Paper Nr: 5
Title:

IDENTIFICATION OF CRITICAL SUCCESS FACTORS TO ERP PROJECT MANAGEMENT - An Application of Grey Relational Analysis and Analytic Hierarchy Process

Authors:

Chandra Sekhar Dronavajjala, Sreeraju Nichenametla and Rajendra Sahu

Abstract: ERP system implementations are complex undertakings and many of them are unsuccessful. It is therefore important to find out which critical success factors (CSFs) drive ERP project success. In the present article, we identified 17 CSFs from a literature survey and from the questionnaire responses of various targeted respondents, including international ERP vendors, ERP customers and ERP-implementing companies. Based on grounded theory analysis, these 17 CSFs are grouped according to the Project Management (PM) knowledge areas of time, quality, cost, scope and expectation. Finally, we analyzed the questionnaire responses using Grey Relational Analysis and the Analytic Hierarchy Process (AHP) to determine the CSFs' contribution to the success of ERP project management. We further analyzed the questionnaire responses for a group that is unable to reach a compromise to make a decision.
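For readers unfamiliar with the technique, Grey Relational Analysis ranks alternatives by their closeness to an ideal reference series. The sketch below uses invented CSF names and scores purely for illustration; it is not the paper's data, and the paper's exact normalisation and weighting may differ.

```python
# Illustrative Grey Relational Analysis (GRA) ranking of hypothetical CSF scores.
# Keys: candidate CSFs; values: questionnaire scores per criterion (larger is better).
SCORES = {
    "Top management support": [4.6, 4.2, 4.8],
    "User training":          [4.1, 4.4, 4.0],
    "Vendor support":         [3.2, 3.6, 3.1],
}

def grey_relational_grades(scores, zeta=0.5):
    """Grade each alternative by its grey relational closeness to the ideal
    series; zeta is the usual distinguishing coefficient."""
    names = list(scores)
    cols = list(zip(*scores.values()))
    # 1. Normalise each criterion to [0, 1] (larger-is-better).
    norm = {n: [(v - min(c)) / (max(c) - min(c)) for v, c in zip(scores[n], cols)]
            for n in names}
    # 2. Deviation from the ideal (reference) series, which is all ones.
    dev = {n: [abs(1.0 - v) for v in norm[n]] for n in names}
    dmin = min(d for row in dev.values() for d in row)
    dmax = max(d for row in dev.values() for d in row)
    # 3. Grey relational coefficient per criterion; grade = mean coefficient.
    return {n: sum((dmin + zeta * dmax) / (d + zeta * dmax) for d in dev[n]) / len(dev[n])
            for n in names}

grades = grey_relational_grades(SCORES)
ranking = sorted(grades, key=grades.get, reverse=True)
print(ranking)
```

With these made-up scores the consistently strongest factor ranks first; in the paper, the resulting grades would then feed the AHP-based comparison.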

Paper Nr: 8
Title:

MANAGING ENGINEERING CHANGES ALONG WITH NEW PRODUCT DEVELOPMENT

Authors:

Weilin Li

Abstract: This proposed research aims to develop a process model for managing Engineering Changes (ECs) while other New Product Development (NPD) activities are being carried out in a company. The discrete-event simulation model incorporates Engineering Change Management (ECM) into an NPD environment by allowing ECs to compete for limited resources with regular NPD activities. The goal is to examine how the size and frequency of NPD and ECM activities, the NPD process structure (in terms of overlapping and department interaction), and the policies an organization employs (such as resource-usage priority and project cancellation policy) affect the lead time and productivity of both NPD and ECM. Decision-making suggestions for minimizing EC impact are drawn from an overall enterprise-system-level perspective based on the simulation results.

Paper Nr: 9
Title:

PERFORMANCE OVERHEAD OF ERP SYSTEMS IN PARAVIRTUALIZED ENVIRONMENTS

Authors:

André Bögelsack

Abstract: Virtualization is a big trend in the current IT world and is used intensively in today's computing centres. But little is known about what happens to the performance of computer systems when they run in virtual environments. This work focuses on the performance aspect, especially in the field of Enterprise Resource Planning (ERP) systems. It takes a quantitative approach, using laboratory experiments to measure the performance differences between a virtualized and a non-virtualized ERP system. First, on the basis of a literature review, a performance measurement framework is developed to provide a comprehensive guideline on how to measure the performance of an ERP system in a virtualized environment. Second, the performance measurement focuses on the overhead in CPU-, memory- and I/O-intensive situations. Third, the focus lies on a root cause analysis. The results obtained will be analyzed and interpreted to give recommendations for the further development of both the ERP system and the virtualization solution. The outcome may be useful for future computing centre design when introducing new ERP systems and service delivery concepts such as Software as a Service (SaaS) in a virtualized or non-virtualized environment.

Paper Nr: 10
Title:

IMPROVING THE USABILITY OF ERP SYSTEMS THROUGH THE APPLICATION OF ADAPTIVE USER INTERFACES

Authors:

Akash Singh and Janet Wesson

Abstract: A need exists to improve the overall usability of enterprise resource planning systems. Current research has shown that the user interfaces of these systems are too complex and difficult to use. Enterprise resource planning systems for small enterprises are currently too rigid and not flexible enough to match the constantly changing business landscape of small enterprises. This paper proposes the use of adaptive user interfaces as a means to improve the overall usability of enterprise resource planning systems for small enterprises. Adaptive user interfaces are capable of improving system usability by reducing user interface complexity and improving the overall user experience. This could provide small enterprises with the flexibility and adaptability that they require from enterprise resource planning systems.

Paper Nr: 11
Title:

INTEROPERABILITY IN PEDAGOGICAL eLEARNING SERVICES

Authors:

Ricardo Queirós

Abstract: The ultimate goal of this research plan is to improve the learning experience of students through the combination of pedagogical eLearning services. Service-oriented architectures are already being used in eLearning, but in this work the focus is on services of pedagogical value, rather than on generic services adapted from other business systems. This approach to the architecture of eLearning platforms raises challenges addressed by this work, namely: conceptual modeling of the pedagogical eLearning services domain; interoperability and coordination of pedagogical eLearning services; conversion of existing eLearning systems to pedagogical services; and adaptation of eLearning services to individual learners. An improved eLearning platform will incorporate learning tools adequate to the domains it covers and will focus on the individual learner who uses it. With this approach we expect to raise the pedagogical value of eLearning platforms.

Workshops

PRIS 2009

Full Papers
Paper Nr: 4
Title:

Clustering and Density Estimation for Streaming Data using Volume Prototypes

Authors:

Maiko Sato, Mineichi Kudo and Jun Toyama

Abstract: The authors have proposed volume prototypes as a compact representation of huge data sets or data streams, along with a one-pass algorithm to find them. A reasonable number of volume prototypes can be used, instead of an enormous number of data points, for many applications including classification, clustering and density estimation. In this paper, two algorithms using volume prototypes, called VKM and VEM, are introduced for clustering and density estimation. Compared with other algorithms for such huge data, we show that our algorithms are advantageous in processing speed while keeping the same degree of performance, and that both applications can be served from the same set of volume prototypes.

Paper Nr: 5
Title:

Estimating the Number of Segments of a Turn in Dialogue Systems

Authors:

Vicent Tamarit and Carlos-D. Martínez-Hinarejos

Abstract: An important part of a dialogue system is the correct labelling of turns with dialogue-related meaning. This meaning is usually represented by dialogue acts (DAs), which give the system semantic information about user intentions. This labelling is usually done in two steps: dividing the turn into segments, and classifying the segments into DAs. Some works have shown that the segmentation step can be improved by knowing the correct number of segments in the turn before the segmentation. We present an estimation of the probability of the number of segments in the turn, and we propose and evaluate some features for estimating this probability based on the transcription of the turn. The experiments include the SwitchBoard and Dihana corpora and show that this method correctly estimates the number of segments for 72% of the turns in the SwitchBoard corpus and 78% in the Dihana corpus.

Paper Nr: 6
Title:

On the Automatic Classification of Reading Disorders

Authors:

Andreas Maier, Caroline Parchmann, Tobias Bocklet, Florian Hönig, Oliver Kratz, Stefanie Horndasch, Elmar Nöth and Gunther Moll

Abstract: In this paper, we present an automatic classification approach to identify reading disorders in children. This identification is based on a standardized test. In the original setup the test is performed by a human supervisor who measures the reading duration and, at the same time, notes down all reading errors of the child. In this manner we recorded tests of 38 children who were suspected to have reading disorders. The data were then processed by an automatic system which employs speech recognition to identify the reading errors. In a subsequent classification experiment, based on the speech recognizer’s output and the duration of the test, 94.7% of the children could be classified correctly.

Paper Nr: 8
Title:

Recognition-based Segmentation of Arabic Handwriting

Authors:

Ashraf Elnagar and Rahima Bentrcia

Abstract: Several segmentation approaches proposed in past decades for Arabic handwriting suffer from over-segmentation, a problem that decomposes a single letter into small strokes. The aim of this work is to handle this problem using Artificial Neural Networks together with a set of combination rules that keep the correct strokes (letters) and correctly combine the over-segmented ones into intact letters. After word segmentation, the resulting segments are normalized. Then, a set of features is extracted from each segment and passed to an Artificial Neural Network to be recognized. Finally, the proposed combination rules are applied to unrecognized strokes and to specific recognized letters. The success rate in the experimental results exceeds 95%.

Paper Nr: 9
Title:

RefLink: An Interface that Enables People with Motion Impairments to Analyze Web Content and Dynamically Link to References

Authors:

Margrit Betke and Smita Deshpande

Abstract: In this paper, we present RefLink, an interface that allows users to analyze the content of a web page by dynamically linking to an online encyclopedia such as Wikipedia. Upon opening a web page, RefLink instantly provides a list of terms extracted from the page and annotates each term with the number of its occurrences in the page. RefLink uses a text-to-speech interface to read out the list of terms. The user can select a term of interest and follow its link to the encyclopedia. RefLink thus helps users perform an informed and efficient contextual analysis. Initial user testing suggests that RefLink is a valuable web browsing tool, in particular for people with motion impairments, because it greatly simplifies the process of obtaining reference material and performing contextual analysis.

Paper Nr: 12
Title:

PCA Supervised and Unsupervised Classifiers in Signal Processing

Authors:

Catalina Cocianu, Luminita State, Panayiotis Vlamos, Constantin Doru and Corina Sararu

Abstract: The aim of the research reported in this paper is to investigate the potential of a principal-directions-based approach in supervised and unsupervised frameworks. The structure of a class is represented in terms of the estimates of its principal directions computed from data; the overall dissimilarity of a particular object with a given class is given by the “disturbance” of that structure when the object is identified as a member of the class. In the unsupervised framework, the clusters are computed using the estimates of the principal directions. Our attempt uses arguments based on principal components to refine the basic idea of k-means, aiming to assure soundness and homogeneity of the resulting clusters. Each cluster is represented in terms of its skeleton, given by a set of orthogonal unit eigenvectors (principal directions) of the sample covariance matrix, i.e. the principal directions corresponding to the maximum variability of the “cloud” from a metric point of view. A series of experimentally established conclusions is presented in the final section of the paper.
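
As a rough illustration of the skeleton computation described in the abstract above, the sketch below extracts a cluster's principal directions as the leading unit eigenvectors of its sample covariance matrix. The toy data, the number of retained directions and the function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def principal_directions(cluster, k=2):
    """Return the top-k unit eigenvectors (principal directions)
    of a cluster's sample covariance matrix, one per row."""
    centered = cluster - cluster.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh: ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # re-sort by descending variance
    return eigvecs[:, order[:k]].T

# Toy cluster: points spread mostly along the first axis
rng = np.random.default_rng(0)
cluster = rng.normal(size=(200, 3)) * np.array([5.0, 1.0, 0.2])
dirs = principal_directions(cluster, k=2)
# The leading direction should align closely with the first axis
print(abs(dirs[0][0]))
```

A cluster's "skeleton" in the paper's sense is this small set of orthonormal directions; dissimilarity of a candidate object can then be scored by how much re-estimating the directions with the object included disturbs them.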

Paper Nr: 13
Title:

Semi-Supervised Least-Squares Support Vector Classifier based on Virtual Leave One Out Residuals

Authors:

Stanislaw Jankowski, Zbigniew Szymański and Ewa Piatkowska-Janko

Abstract: We present a new semi-supervised learning system based on a least-squares support vector machine classifier. We apply the virtual leave-one-out residuals as the criterion for selecting the most influential data for the label-switching test. The analytic form of the solution enables a substantial reduction in computational cost. The quality of the method was tested on an artificial data set (the two-moons problem) and on a real signal-averaged ECG data set. The correct classification score is better than that of other methods.

Paper Nr: 14
Title:

Automatic Analysis of Historical Manuscripts

Authors:

Costantino Grana, Daniele Borghesani and Rita Cucchiara

Abstract: In this paper a document analysis tool for historical manuscripts is proposed. The goal is to automatically segment the layout components of the page, that is, text, pictures and decorations. We specifically focus on the pictures, proposing a set of visual features able to identify significant pictures and separate them from all the floral and abstract decorations. The analysis is performed by blocks using a limited set of color and texture features, including a new texture descriptor particularly effective for this task, namely the Gradient Spatial Dependency Matrix. The feature vectors are processed by an embedding procedure which increases performance in the later SVM classification.

Paper Nr: 15
Title:

Combining Data Clusterings with Instance Level Constraints

Authors:

João M. M. Duarte, Ana L. N. Fred and F. Jorge Duarte

Abstract: Recent work has focused on the incorporation of a priori knowledge into the data clustering process, in the form of pairwise constraints, aiming to improve clustering quality and find clustering solutions appropriate to specific tasks or interests. In this work, we integrate must-link and cannot-link constraints into the cluster ensemble framework. Two algorithms for combining multiple data partitions with instance-level constraints are proposed. The first consists of a modification to Evidence Accumulation Clustering; the second uses a genetic algorithm to maximize both the similarity between the cluster ensemble and the target consensus partition and the constraint satisfaction. Experimental results show that the proposed constrained clustering combination methods outperform unconstrained Evidence Accumulation Clustering.

MSVVEIS 2009

Full Papers
Paper Nr: 10
Title:

Use Case Maps as an Aid in the Construction of a Formal Specification

Authors:

Cyrille Dongmo and John A. van der Poll

Abstract: A Use Case Map (UCM) is a scenario-based visual notation facilitating the requirements definition of complex systems. A UCM may be generated either from a set of informal requirements, or from use cases normally expressed in natural language. Natural languages are, however, inherently ambiguous, and as a semi-formal notation, UCMs have the potential to bring more clarity into the functional description of a system. They may furthermore eliminate possible errors in the user requirements. The semi-formal notation of UCMs aims to show how things work generally, but it is not suitable for reasoning formally about system behaviour. It is plausible, therefore, that the use of UCMs as an intermediate step may facilitate the construction of a formal specification. To this end, this paper proposes a mechanism whereby a UCM may be translated into Object-Z.

Paper Nr: 11
Title:

Test Cases Generation for Nondeterministic Duration Systems

Authors:

Lotfi Majdoub and Riadh Robbana

Abstract: In this paper, we are interested in testing duration systems. Duration systems are an extension of real-time systems in which the delays that separate events depend on the accumulated times spent by the computation at particular locations of the system. We present a test generation method for nondeterministic duration systems that uses an approximation method. This method extends the model with an over-approximation, the approximate model, which contains the digitized timed words of all the real computations of the duration system. Then, we propose an algorithm that, considering discrete time, generates a set of test cases represented as a tree.

Paper Nr: 16
Title:

A Petri Net Based Approach for Modelling and Analyzing Interorganizational Workflows with Dynamic Structure

Authors:

Oana Otilia Prisecaru

Abstract: Interorganizational workflows represent a special type of workflows that involve more than one organization. In this paper, an interorganizational workflow will be modelled using a special class of nested Petri nets, dynamic interorganizational workflow nets (DIWF-nets). DIWF-nets can model interorganizational workflows in which some of the local workflows can be removed during the execution of the workflow due to exceptional situations. Our approach permits a clear distinction between the component workflows and the communication structure. The paper defines a notion of behavioural correctness (soundness) and proves that this property is decidable for DIWF-nets.

Paper Nr: 20
Title:

Modeling the System Organization of Multi-Agent Systems in Early Design Stages with Coarse Design Diagrams

Authors:

Lawrence Cabac and Kolja Markwardt

Abstract: In this paper we propose to use a coarse system overview from the beginning of the analysis stage to better support the development team of a multi-agent system in finding an architectural direction for the envisioned system. To this end we propose to transfer the syntax of use case modeling to the analysis / architectural design stage, for designing preliminary roles and interactions. The reason for this is that modeling with use case syntax is lightweight, intuitive and well known to most developers. We also present a plugin for RENEW which is capable of drawing use cases and generating code structures for multi-agent applications in the context of MULAN.

Paper Nr: 23
Title:

A Process-Oriented Tool-Platform for Distributed Development

Authors:

Kolja Markwardt, Lawrence Cabac and Christine Reese

Abstract: Many software projects today are executed geographically distributed, with teams of developers, designers, testers, etc. in different countries all over the globe. This requires a development environment that allows easy and flexible adaptation to different kinds of teams and their processes. This paper presents an architecture for a distributed software development environment that allows users to collaborate on flexible processes. The focus is put on providing distributed tools and organising the processes needed to successfully produce software with these tools.

Paper Nr: 25
Title:

Preliminary Design of an Agent-based System for Human Collaboration in Chemical Incidents Response

Authors:

Mihnea Scafeș and Costin Bădică

Abstract: In this paper we introduce the initial design of an agent-based system to support the human collaboration processes occurring in chemical incident response situations. The design process includes: (i) description of the initial agent reasoning mechanisms using goals and plans; (ii) abstraction of external agent functionalities using tasks and services; (iii) organization of agents into communities. The design is illustrated with a realistic utilization scenario.

Short Papers
Paper Nr: 6
Title:

From Reactive to Deliberative Multi-agent Planning

Authors:

Ammar Mohammed and Ulrich Furbach

Abstract: Various research efforts have used hybrid automata to formally model and coordinate the plans of reactive multi-agent systems situated in dynamic environments where time is critical. In most cases, however, pure reactivity in dynamic environments is not satisfactory: it is preferable that agents plan their behaviors according to some preference function. Most current verification tools for hybrid automata are inadequate for modeling such agents’ plans. Therefore, this paper extends the decision making of hybrid automata by means of utility functions on transitions. A scenario taken from supply chain management demonstrates the paper’s approach. Agents’ plans are analyzed using a prototype constraint logic programming implementation.

Paper Nr: 7
Title:

Methods for Service Identification: A Criteria-based Literature Review

Authors:

René Boerner and Matthias Goeken

Abstract: Business-driven identification of services is a precondition for a successful implementation of service-oriented architectures (SOA). This paper compares existing identification methods retrieved from related work and discusses their shortcomings. Finally, it proposes a process-oriented method of service identification. This approach incorporates the business point of view and strategic and economic aspects, as well as technical feasibility.

Paper Nr: 8
Title:

Layered Queuing Networks for Simulating Enterprise Resource Planning Systems

Authors:

Stephan Gradl, André Böegelsack, Holger Wittges and Helmut Krcmar

Abstract: As Enterprise Resource Planning (ERP) systems form the backbone of today’s business processes, the stability and performance of those systems are vital to the whole company. In many cases, however, little is known about what happens to the performance of an ERP system when a patch is applied or changes are made to the system. This paper presents an approach to simulating ERP systems before changes are made to them. The approach involves the development of so-called Layered Queuing Networks (LQNs). To construct such an LQN, the paper utilizes a trace in the ERP system to gather data about the internal ERP system architecture. These data are used to reconstruct a section of the internal architecture of the ERP system. The ERP system’s architecture is then transformed into an LQN, and the LQN is simulated.

Paper Nr: 21
Title:

The Role of Testing in Agile and Conventional Methodologies

Authors:

Agustin Yagüe and Juan Garbajosa

Abstract: Testing in software engineering is a well-established practice. Though the scope of testing may differ depending on the community (for some communities it is a process in itself, while for others it is an activity or a task under verification and validation), many fundamental issues around testing are shared by all communities. However, agile methodologies are emerging in the software engineering landscape and are changing the picture. For instance, in agile methodologies it may happen that code is written precisely to pass a test. Moreover, tests may replace the requirement specification. The concepts underlying test practice are therefore different in conventional and agile approaches. This paper analyses these two different perspectives on testing, the conventional and the agile, and discusses some of the implications that these two approaches may have in software engineering.

Paper Nr: 24
Title:

Multi Project Organization Optimization using Genetic Algorithm

Authors:

Sven Tackenberg and Sebastian Schneider

Abstract: Due to impatient customers and competitive threats, it has become increasingly important to shorten the lead time of development projects and to bring new products faster to market. Furthermore, many organizations are faced with the challenge of planning and managing the simultaneous execution of multiple dependent projects under tight time and resource constraints. Within that kind of business environment, effective project management and scheduling is crucial to organizational performance. A genetic algorithm approach with a novel genotype and GP mapping operation is proposed to minimize the overall project duration and budget of multiple projects for a resource-constrained multi-project scheduling problem (RCMPSP) without violating inter-project resource constraints or intra-project precedence constraints. Stochastic rework of tasks, variable assignment of actors and the stochastic makespan of a specific task are considered by the introduced GA. The proposed genetic algorithm is tested on scheduling problems with and without stochastic feedback and is shown to converge quickly to a globally optimal solution with respect to the multi-criteria objectives.

Posters
Paper Nr: 2
Title:

An Automatic Transformation of Event B Models into UML using an Interactive Inference Engine Thinker

Authors:

Leila Jemni Ben Ayed and Mohamed Nidhal Jelassi

Abstract: In this paper, we describe Thinker, an interactive rule-based inference engine that supports inference rules for which selected concepts can yield different results. This is the case for interactive transformation approaches that generate different target concepts from the same initial ones, for example the transformation between formal and semi-formal notations, and especially the application of a rule-based approach translating B abstract machines into UML class diagrams. In addition, Thinker allows us to select one solution from a set of proposed solutions and to modify previous selections if a choice turns out to be ambiguous. Our inference engine is also generic and can be used in more than one domain. We illustrate our tool with the translation from B to UML class diagrams.

Paper Nr: 9
Title:

Using SCADE for Decision Support in Dam Management

Authors:

María del Mar Gallardo, Pedro Merino, Laura Panizo and Antonio Linares

Abstract: Formal methods have been widely used to analyze discrete systems such as software. However, in recent years their field of application has been extended to deal with hybrid systems, that is, systems that combine both continuous and discrete behaviors. This paper presents the experimental use of formal methods in a new application domain, the management of dams. Such systems present hybrid behaviors which are not properly dealt with by classic numerical models. We use the tool SCADE, based on the formal language Lustre, to model, simulate and verify dam elements in order to obtain a Decision Support System (DSS) suitable for use by the dam operator.

Paper Nr: 14
Title:

Database Integrity in Integrated Systems

Authors:

José Francisco Zelasco and Judith Donayo

Abstract: This article makes a proposal concerning certain aspects of Unified Modelling Language (UML) and Unified Process (UP) usage when applied to database integrity in integrated information systems. The use of tools and heuristics based on experience gathered from the evolution of the MERISE method provides comparative benefits when conceiving this type of application. The heuristics and tools employed consist of starting from a higher level of abstraction, supported by reverse engineering and re-engineering methods. The knowledge of a minimal data schema enables a global vision. Data integrity is assured by means of verifications between the minimal data diagram and the persistent data objects, reducing the number of modifications made to the initially built subsystem data. Finally, the conceptual and organizational levels of treatment allow the comparison of organization options, leading naturally to the user manual and the object design.

Paper Nr: 15
Title:

Making Use Case Slices Manage Variability in Aspect-based Product Line

Authors:

Satish Mahadevan Srinivasan and Mansour Zand

Abstract: The use case slice clearly lacks a composition mechanism, which makes it difficult to manage variabilities in Software Product Lines: a use case slice can only convey the design of a single member of a product line. Aspect-based modeling of use case slices looks to be a promising solution, but there are a few issues with its composition mechanism: it lacks a strong and familiar algebraic model and also fails to address precedence management issues among artifacts such as pointcuts and advices. This paper suggests a composition mechanism for the aspect-based implementation of use case slices which provides a familiar algebraic model and resolves the issues related to precedence management. We discuss a hypothetical aspect-based Expressions Product Line (EPL) and show how use case slices can be used to model the variabilities in the EPL.

Paper Nr: 17
Title:

Using UML Activity Diagrams and Event B for the Specification and the Verification of Workflow Applications

Authors:

Ahlem Ben Younes and Leila Jemni Ben Ayed

Abstract: This paper presents a specification and formal verification technique for workflow applications using UML Activity Diagrams (AD) and Event B. The workflow application is initially modeled graphically and hierarchically with UML AD; the resulting model is then translated into Event B in order to check the correctness of the workflow model (such as absence of deadlock and livelock, fairness, etc.) automatically, using the B support tools. In this paper, we discuss the contributions and illustrate the proposed technique with an example workflow application.

Paper Nr: 18
Title:

ImageNetDiff: Finding Differences in Models

Authors:

Lawrence Cabac, Kolja Markwardt and Jan Schlüter

Abstract: In this paper we propose a method, and present a tool as a plugin for RENEW, that supports the process of discovering differences in possibly conflicting versions of all kinds of supported diagrams. These diagrams can be semi-formal UML diagrams, Petri net models or simple JHotDraw drawings. Instead of searching for differences on the syntactical or even the semantical level, we choose to find differences on the visual level.

Paper Nr: 19
Title:

Information Systems Configuration Analysis using Event-driven Computer Simulation

Authors:

Tomasz Walkowiak and Katarzyna Michalska

Abstract: The paper presents a method for analyzing dependability aspects of service-oriented information systems. The system analysis is based on a functional-dependability model suitable for system simulation. The model is described by the XML Domain Modelling Language (XDML) and is automatically transformed into an input model for computer simulation. For event-driven simulation, a modified SSFNet simulator is used. Based on the simulation results, dependability metrics such as the availability and response time of a business service are calculated. The paper presents the service-oriented information system model and the method of its analysis, supported by the created tool, which was tested on an exemplary system.

WOSIS 2009

Full Papers
Paper Nr: 2
Title:

Unconditionally Secure Authenticated Encryption with Shorter Keys

Authors:

Basel Alomair and Radha Poovendran

Abstract: Confidentiality and integrity are two main objectives of security systems, and the literature of cryptography is rich with proposed techniques to achieve them. To satisfy the requirements of a wide range of applications, a variety of techniques with different properties and performances have appeared in the literature. In this work, we address the problem of confidentiality and integrity in communications over public channels. We propose an unconditionally secure authenticated encryption scheme that requires shorter key material than the current state of the art. By combining properties of the integer field Zp with the fact that the message to be authenticated is unknown to adversaries (it is encrypted), message integrity is achieved using a single modular multiplication. Against an adversary equipped with a single antenna, the adversary’s probability of modifying a valid message in a way undetected by the intended receiver can be made an absolute zero. After describing the basic scheme and completing its detailed security analysis, we describe an extension to the main scheme that can substantially reduce the required amount of key material.
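
To illustrate the flavour of achieving integrity with a single modular multiplication over Zp, the following is a textbook one-time MAC sketch; the prime, key-generation details and API below are illustrative assumptions and not the authors' scheme.

```python
import secrets

P = (1 << 127) - 1  # the Mersenne prime 2^127 - 1; illustrative field size

def keygen():
    # k1 must be nonzero so the tag actually depends on the message
    return secrets.randbelow(P - 1) + 1, secrets.randbelow(P)

def tag(message_int, k1, k2):
    """One-time MAC over Zp: a single modular multiply-and-add."""
    return (k1 * message_int + k2) % P

k1, k2 = keygen()
m = 42
t = tag(m, k1, k2)
assert tag(m, k1, k2) == t       # receiver recomputes the tag and accepts
assert tag(m + 1, k1, k2) != t   # a modified message yields a different tag
```

With a fresh key pair per message, any substitution m → m' changes the tag by k1·(m' − m) mod P, which is nonzero for any distinct messages in the field, so this toy construction detects substitution deterministically under its one-time-use assumption.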

Paper Nr: 3
Title:

SLSB: Improving the Steganographic Algorithm LSB

Authors:

Juan Jose Roque

Abstract: This paper presents a novel steganographic algorithm based on the spatial domain: Selected Least Significant Bits (SLSB). It works with the least significant bits of one of the pixel color components in the image and changes them according to the bits of the message to hide. The remaining bits of the selected pixel color component are also changed in order to get the color nearest to the original one. This new method has been compared with others that work in the spatial domain; the key difference is that the LSBs of every pixel color component are not used to embed the message, only those of the selected pixel color component.
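
For context, the plain LSB baseline that SLSB builds on can be sketched as follows: one message bit replaces the least significant bit of each byte of a single color channel. This sketch deliberately omits SLSB's nearest-color adjustment of the remaining bits, and all names are illustrative.

```python
def embed_lsb(channel_bytes, message_bits):
    """Hide message_bits in the least significant bit of each byte
    of one color channel (plain LSB, one bit per byte)."""
    out = bytearray(channel_bytes)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & 0xFE) | bit  # clear the LSB, set the message bit
    return bytes(out)

def extract_lsb(channel_bytes, n_bits):
    """Recover the first n_bits hidden message bits."""
    return [b & 1 for b in channel_bytes[:n_bits]]

# Hide four bits in the blue channel of four pixels
blue = bytes([200, 201, 120, 55])
bits = [1, 0, 1, 1]
stego = embed_lsb(blue, bits)
print(extract_lsb(stego, 4))  # [1, 0, 1, 1]
```

Each byte changes by at most 1, which is why plain LSB is visually imperceptible; SLSB's contribution is to confine these changes to a single selected component and compensate in the other bits to stay close to the original color.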

Paper Nr: 4
Title:

Security as a Service - A Reference Architecture for SOA Security

Authors:

Mukhtiar Memon, Michael Hafner and Ruth Breu

Abstract: Securing service-oriented systems is challenging because, like the business services, the security services are distributed across the SOA system. Enforcing security exclusively at the endpoints creates a significant security burden, and every endpoint has to implement the entire security infrastructure, which is an expensive approach. Currently, very little work has been done to separate security from service endpoints. We propose a Security As A Service (SAAS) approach, which shifts the major security burden from service endpoints to dedicated, shared security services within a security domain. Security services are composed from components and integrated based on the Service Component Architecture (SCA) model. In this contribution, we apply the SAAS paradigm to implement security for SECTISSIMO, a platform-independent framework for security modeling and implementation (M. Memon, M. Hafner, and R. Breu).

Paper Nr: 6
Title:

Deterministic Cryptanalysis of some Stream Ciphers

Authors:

P. Caballero-Gil, A. Fúster-Sabater and C. Hernández-Goya

Abstract: A new graph-based approach to the edit distance cryptanalysis of some clock-controlled generators is presented here in order to simplify the search trees of the original attacks. In particular, the proposed improvement is based on cut sets defined on certain graphs, so that only the most promising branches of the search tree have to be analyzed, because certain shortest paths provide the edit distances. The strongest aspects of the proposal are: (a) the results obtained from the attack are absolutely deterministic, and (b) many inconsistent initial states are recognized beforehand and avoided during the search.

Paper Nr: 10
Title:

Enhancing Rights Management Systems Through the Development of Trusted Value Networks

Authors:

Víctor Torres, Jaime Delgado, Xavier Maroñas, Silvia Llorente and Marc Gauvin

Abstract: In this paper, we present an innovative architecture that enables the digital representation of original works and derivatives while implementing Digital Rights Management (DRM) with the aim of focusing on promoting trust within the multimedia content value networks rather than solely on content access and protection control. The system combines different features common in DRM systems such as licensing, content protection, authorisation and reporting together with innovative concepts, such as the linkage of original and derived content and the definition of potential rights. The transmission of reporting requests across the content value network combined with the possibility for authors to exercise rights over derivative works enables the system to determine automatically the percentage of income corresponding to each of the actors involved in different steps of the creation and distribution chain. The implementation consists of a web application which interacts with different external services plus a user application used to render protected content. It is currently publicly accessible for evaluation.

Paper Nr: 11
Title:

Risk-Aware Secure Supply Chain Master Planning

Authors:

Axel Schröpfer, Florian Kerschbaum, Dagmar Sadkowiak and Richard Pibernik

Abstract: Supply chain master planning strives for optimally aligned production, warehousing and transportation decisions across multiple partners. Its execution in practice is limited by business partners’ reluctance to share their vital business data. Secure Multi-Party Computation can be used to make such collaborative computations privacy-preserving by applying cryptographic techniques. Computation thus becomes acceptable in practice, but at an additional cost in complexity depending on the chosen protection level. Because not all data to be shared induces the same risk and demand for protection, we assess the risk of data elements individually and then apply appropriate protection. This speeds up the secure computation and enables significant improvements in the supply chain.

Paper Nr: 17
Title:

An Enhanced Approach to using Virtual Directories for Protecting Sensitive Information

Authors:

Dongwan Shin and William Claycomb

Abstract: Enterprise directory services are commonly used in enterprise systems to store object information relating to employees, computers, contacts, etc. These stores can act as information providers or sources for authentication and access control decisions, and could potentially contain sensitive information. An insider attack, particularly if carried out using administrative privileges, could compromise large amounts of directory information. We present a solution for protecting directory services information from insider attacks using existing key management infrastructure and a new component called a Personal Virtual Directory Service. We show how the impact on existing users, client applications, and directory services is minimized, and how we prevent insider attacks from revealing protected data. Additionally, our solution is supported by implementation results showing the impact on client performance and directory storage capacity.

Paper Nr: 19
Title:

MMSM-SME: Methodology for the Management of Security and its Maturity in SME

Authors:

Luís Enrique Sánchez, Antonio Santos-Olmo Parra, Eduardo Fernández-Medina and Mario Piattini

Abstract: Due to the growing dependence of the information society on Information and Communication Technologies (ICTs), the need to protect information is becoming more and more important for enterprises. In this context, Information Security Management Systems (ISMSs), which are very important for the stability of enterprises' information systems, have arisen. The availability of these systems has become more and more vital for the evolution of Small and Medium-Sized Enterprises (SMEs). In this article, we show the methodology that we have developed for the development, implementation and maintenance of a security management system, adapted to the needs and resources available to SMEs. This approach is being directly applied to real case studies, through which we are obtaining constant improvements in its application.

Short Papers
Paper Nr: 5
Title:

Rethinking Self-organized Public-key Management for Mobile Ad-Hoc Networks

Authors:

Candelaria Hernández-Goya, Pino Caballero Gil and Amparo Fúster-Sabater

Abstract: In this paper, the self-organized public-key management scheme proposed for MANETs is considered in order to guarantee that all nodes play identical roles in the network. In our approach, the responsibility for creating, storing, distributing and revoking nodes’ public keys rests with the nodes themselves. In particular, the methods described and evaluated here are aimed at improving the process of building the local certificate repositories associated with each node in the self-organised model. To do so, we approach the problem by combining known authentication elements, such as the web-of-trust concept, with common ideas from routing protocols, such as the MultiPoint Relay technique used in the Optimized Link State Routing protocol. Our proposal leads to a significant improvement in the efficiency of the whole model and implies a good trade-off among security, overhead and flexibility. Experimental results show an important reduction in resource consumption during the certificate verification process associated with authentication.

Paper Nr: 7
Title:

DLδε-OrBAC: Context based Access Control

Authors:

Narhimene Boustia and Aicha Mokhtari

Abstract: The final objective of an access control model is to provide a framework to decide whether an action performed by subjects on objects is permitted or not. It is not convenient to directly specify an access control policy using only the concepts of subjects, objects and actions. In OrBAC (Organization Based Access Control), we can express not only static authorizations but also dynamic authorizations that depend on context. Formally, OrBAC is described in first-order logic, where the context is one of the arguments of each predicate. We propose a new formalism based on description logic with defaults and exceptions (F. Coupey and C. Fouqueré) to describe and reason on the OrBAC model. This paper enriches a previous work (N. Boustia and A. Mokhtari) by introducing an exception operator (ε). The formalism covers not only information system concepts such as users, objects, subjects and roles, but also the context, through the addition of two operators for default (δ) and exception (ε). Notice that the time complexity is still polynomial (F.M. Donini et al.).

Paper Nr: 13
Title:

Detecting Malicious Insider Threats using a Null Affinity Temporal Three Dimensional Matrix Relation

Authors:

Jonathan White, Brajendra Panda, Quassai Yassen, Khanh Nguyen and Weihan Li

Abstract: A new approach for detecting malicious access to a database system is proposed and tested in this work. The proposed method manipulates usage information from database logs into three-dimensional null-related matrix clusters that reveal new information about which sets of data items should never be related during defined temporal time frames across several applications. If access is detected within these three-dimensional null-related clusters, this is an indication of illicit behavior, and further security procedures should be triggered. In this paper, we describe the null affinity algorithm and illustrate, through several examples, its use for problem decomposition and for access control over data items which should not be accessed together, resulting in a novel way to detect malicious access.
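The core idea of the abstract, that item pairs never accessed together within a given time frame are "null-related" and any access touching such a pair is suspicious, can be sketched as follows. This is a minimal illustrative reading of the abstract, not the authors' algorithm; all names (`build_allowed_pairs`, `flag_access`, the log format) are hypothetical.

```python
from collections import defaultdict
from itertools import combinations

def build_allowed_pairs(log):
    # log: iterable of (time_frame, items_accessed_together) from legitimate
    # usage. Records every pair of items seen together in each time frame.
    allowed = defaultdict(set)
    for frame, items in log:
        allowed[frame].update(combinations(sorted(items), 2))
    return allowed

def flag_access(allowed, frame, items):
    # Pairs never observed together in this time frame are "null-related";
    # any such pair in the current access is flagged as potentially illicit.
    return [pair for pair in combinations(sorted(items), 2)
            if pair not in allowed[frame]]
```

An access matching only historically co-occurring pairs returns an empty list; a pair that crosses a null-related cluster is returned for further security procedures.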

Paper Nr: 14
Title:

A Model Driven Approach for Generating Code from Security Requirements

Authors:

Óscar Sánchez, Fernando Molina, Jesús García Molina and Ambrosio Toval

Abstract: Nowadays, Information Systems are present in numerous areas and they usually contain data with special security requirements. However, these requirements do not often receive the attention that they deserve and, on many occasions, they are not considered at all, or only once the system development has finished. On the other hand, the use of model-driven approaches has recently been demonstrated to offer numerous benefits. This paper aims to align the model-driven development paradigm with the consideration of security requirements from the early stages of software development (such as requirements elicitation). With this aim, a security requirements metamodel that formalizes the definition of this kind of requirements is proposed. Based on this metamodel, a Domain Specific Language (DSL) has been built which allows both the construction of requirements models with security features and the automatic generation of other software artefacts from them. An application example that illustrates the approach is also shown.

Paper Nr: 16
Title:

Managing Security Knowledge through Case based Reasoning

Authors:

Corrado Aaron Visaggio and Francesca De Rosa

Abstract: Making a software system secure is a very critical task, especially because it is very hard to consolidate an exhaustive body of knowledge about security risks and related countermeasures. Defining a technological infrastructure for exploiting this knowledge poses many challenges. This paper introduces a system to capture, share and reuse software security knowledge within a software organization. The system collects knowledge in the form of misuse cases and uses Case Based Reasoning to implement knowledge management processes. A reasoned analysis of the system was performed through a case study, in order to identify weaknesses and opportunities for improvement.

NLPCS 2009

Full Papers
Paper Nr: 5
Title:

Cross-Topic Opinion Mining for Real-Time Human-Computer Interaction

Authors:

Alexandra Balahur, Ester Boldrini, Andrés Montoyo and Patricio Martínez-Barco

Abstract: With the recent growth and expansion of the Web 2.0, there has been an important development of new textual genres, such as blog posts or forum entries, that are employed to share opinions about a topic of interest. To the best of our knowledge, previous approaches to corpus annotation mostly concentrated on subjectivity versus objectivity classification and did not annotate emotions on a fine-grained scale. The scheme we propose in this article allows for both coarse- and fine-grained annotation boundaries, as well as distinguishing among polarities and a large set of emotion classes. We used the annotated elements to train our real-time opinion mining system, which we subsequently employ for the classification of new sentences on a closely related topic, “recycling”. We obtain promising results in all the test scenarios, proving, on the one hand, that the corpus is a valid and useful resource and, on the other hand, that our opinion mining method is adequate.

Paper Nr: 10
Title:

Computing Implicit Entities and Events with Getaruns

Authors:

Rodolfo Delmonte

Abstract: In this paper we focus on the notion of “implicit” or lexically unexpressed linguistic elements that are nonetheless necessary for a complete semantic interpretation of a text. We refer to “entities” and “events” because the recovery of the implicit material may affect all the modules of a system for semantic processing, from the grammatically guided components to the inferential and reasoning ones. Reference to the system GETARUNS offers one possible implementation of the algorithms and procedures needed to cope with the problem and allows us to deal with the whole spectrum of phenomena. The paper first addresses the following three types of “implicit” entities and events: the grammatical ones, as suggested by linguistic theories like LFG or similar generative theories; the semantic ones suggested in the FrameNet project, i.e. CNI, DNI, INI; and the pragmatic ones, for which we present a theory and an implementation for the recovery of implicit entities and events of (non-)standard implicatures. In particular we show how the use of commonsense knowledge may fruitfully contribute to finding relevant implied meanings. The last implicit entity, only touched on by the paper for lack of space, is the Subject of Point of View, which is computed by the Semantic Informational Structure and contributes the entity from whose point of view a given subjective statement is expressed.

Paper Nr: 11
Title:

Linguistically-Motivated Automatic Morphological Analysis for Wordnet Enrichment

Authors:

Tom Richens

Abstract: Performance of NLP systems can only be as good as the lexical resources they employ. By modelling the evolved structure of language, there is scope for morpho-semantic enrichment of these resources. A set of linguistically-informed morphological rules is formulated from the CatVar database, implemented in a Java model of WordNet and tested on suffixation and desuffixation. Overgeneration and undergeneration are measured and an approach to improving these by using multilingual resources is proposed.

Paper Nr: 12
Title:

Identifying Conflicts through eMails by using an Emotion Ontology

Authors:

Chahnez Zakaria, Olivier Curé and Kamel Smaïli

Abstract: Within the context of text classification, this paper presents an approach to detect conflict in emails exchanged between colleagues who belong to a geographically distributed enterprise. The idea is to inform a team leader of such situations, and thereby help him prevent serious disagreements between team members. The approach uses the vector space model with TF*IDF weights to represent emails, and a domain ontology of relational conflicts to determine their categories. Our study also addresses the issue of building the ontology, which is done in two phases: first we conceptualize the domain by hand, then we enrich it using the trigger model, which finds terms in corpora that correspond to the different conflicts.
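The vector space model with TF*IDF weighting mentioned above is standard; a minimal sketch follows. This is a generic illustration of the representation, not the authors' system, and the function names are illustrative.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    # docs: list of token lists. TF = raw term count in the document;
    # IDF = log(N / document frequency of the term).
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in docs]

def cosine(u, v):
    # Cosine similarity between two sparse TF*IDF vectors (dicts).
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Emails represented this way can then be compared against category prototypes (here, conflict categories derived from the ontology) by cosine similarity.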

Paper Nr: 13
Title:

Automatic Alignment of Persian and English Lexical Resources: A Structural-Linguistic Approach

Authors:

Rahim Dehkharghani and Mehrnoush Shamsfard

Abstract: Cross-lingual mapping of linguistic resources such as corpora, ontologies, lexicons and thesauri is very important for developing cross-lingual (CL) applications such as machine translation, CL information retrieval and question answering. Developing mapping techniques for lexical ontologies of different languages is not only important for inter-lingual tasks but can also be applied to build lexical ontologies for a new language based on existing ones. In this paper we propose a two-phase approach for mapping a Persian lexical resource to Princeton's WordNet. In the first phase, Persian words are mapped to WordNet synsets using heuristically improved linguistic approaches. In the second phase, the previous mappings are evaluated (accepted or rejected) according to the structural similarities of WordNet and the Persian thesaurus. Although we applied it to Persian, our proposed approach, the SBU methodology, is language-independent. As there is no lexical ontology for Persian, our approach also helps in building one for this language.

Paper Nr: 23
Title:

Objectives for a Query Language for User-activity Data

Authors:

Michael Carl and Arnt Lykke Jakobsen

Abstract: One of the aims of the Eye-to-IT project (FP6 IST 517590) is to integrate keyboard logging and eye-tracking data to study and anticipate the behaviour of human translators. This so-called User-Activity Data (UAD) would make it possible to empirically ground cognitive models and to validate hypotheses of human processing concepts in the data. In order to thoroughly ground a cognitive model of the user in empirical observation, two conditions must be met as a minimum. First, all UAD must be fully synchronised so that the data relate to a common construct. Secondly, the data must be represented in a queryable form so that large volumes of data can be analysed electronically. Two programs have evolved in the Eye-to-IT project: TRANSLOG is designed to register and replay keyboard logging data, while GWM is a tool to record and replay eye-movement data. This paper reports on an attempt to synchronise and integrate the representations of both software components so that sequences of keyboard and eye-movement data can be retrieved and their interaction studied. The outcome of this effort would be the possibility to correlate eye and keyboard activities of translators (the user model) with properties of the source and target texts, and thus to uncover dependencies in the UAD.

Paper Nr: 24
Title:

Modelling the Tip-Of-the-Tongue State in Guided Propagation Networks

Authors:

Dominique G. Béroule and Michael Zock

Abstract: We start by presenting the Tip-Of-the-Tongue (TOT) problem and some theories accounting for it. We then go on to consider it within the framework of a neurobiological, computational model: Guided Propagation Networks (GPNs). “Selective facilitation”, a feature of the model, allows us to formulate and test various hypotheses accounting for the impeded access to a target word. We illustrate and test them via a computer program simulating the TOT state. A comparison with other “spreading activation” models is proposed in the final discussion, as well as an evaluation of the proposed hypotheses.

Posters
Paper Nr: 2
Title:

Comparative Study on Hierarchical Phrase Structures and Linguistic Phrase Structures

Authors:

Tiejun Zhao, Yongliang Ma, Dequan Zheng and Sheng Li

Abstract: This paper proposes a framework for the analysis of SMT translation output from a hierarchical phrase decoder. A tree display tool shows the translation process of the SMT model, and an interactive operation tool provides an adjustment mechanism for translation quality improvement. The work explores automatic or semi-automatic identification and correction of some translation errors, based on a comparison between hierarchical phrase structures and linguistic phrase structures. Parts of the framework have been implemented and preliminary results are presented.

Paper Nr: 7
Title:

A Corpus-based Multi-label Emotion Classification using Maximum Entropy

Authors:

Ye Wu and Fuji Ren

Abstract: Thanks to the Internet, blog posts have emerged as a new grassroots medium, creating a huge resource of text-based emotion. Compared to other, more idealized experimental settings, what we obtain by modeling blogs is more private, honest and polemic, because web logs evolve with and respond to real-world events. In this paper, our corpus consists of a collection of blog posts annotated with multiple labels, which makes the classification of emotion more precise than a single-label set of basic emotions. Employing a maximum entropy classifier, our results show that the emotions can be clearly separated by the proposed method. Additionally, we show that the micro-averaged F1-score of multi-label detection increases as the amount of available training data increases.

Paper Nr: 9
Title:

Managing Metadata Variability within a Hierarchy of Annotation Schemas

Authors:

Ionuţ Cristian Pistol and Dan Cristea

Abstract: The paper describes the theoretical basis of the ALPE model, a hierarchy of annotation formats used to guide the automatic computation of processing flows capable of performing complex linguistic processing tasks. The hierarchy comprises a core, which is a directed acyclic graph whose nodes represent XML annotation formats, and a halo, which contains additional annotation formats. The core hierarchy also serves as a standardization hub for annotated documents. The focus of the paper is the description of the new additions to the model, allowing the integration and usage of non-XML formats in processing flows and new equivalence relations between XML formats.

Paper Nr: 20
Title:

Mining Linguistic and Molecular Biology Texts through Specialized Concept Formation

Authors:

Gemma Bel-Enguix, Veronica Dahl and Dolores Jiménez-López

Abstract: We present, discuss and exemplify a fully implemented specialization of the Concept Formation Cognitive model into a model of text mining that can be applied to human or molecular biology languages.

Paper Nr: 25
Title:

Declarative Parsing and Annotation of Electronic Dictionaries

Authors:

Christian Schneiker, Dietmar Seipel, Werner Wegstein and Klaus Prätor

Abstract: We present a declarative annotation toolkit based on XML and Prolog technologies, and we apply it to annotating the Campe Dictionary in order to obtain an electronic version in XML (TEI). For parsing flat structures, we use a very compact grammar formalism called extended definite clause grammars (EDCGs), an extended version of the DCGs well known from the logic programming language Prolog. For accessing and transforming XML structures, we use the XML query and transformation language FnQuery. It turned out that the declarative approach in Prolog is much more readable, reliable, flexible, and faster than an alternative implementation which we had made in Java and XSLT for the TextGrid community project.

Paper Nr: 26
Title:

Systematic Formulation and Computation of Subjective Spatiotemporal Knowledge Based on Mental Image Directed Semantic Theory: Toward a Formal System for Natural Intelligence

Authors:

Masao Yokota

Abstract: The author has been attempting to model natural intelligence as a formal system based on his original semantic theory, “Mental Image Directed Semantic Theory (MIDST)”. As a first step towards this goal, this paper presents a brief sketch of the systematic representation and computation of subjective spatiotemporal knowledge in natural language, based on certain hypotheses about mental imagery in humans.

TCoB 2009

Full Papers
Paper Nr: 3
Title:

Context Gathering in Meetings: Business Processes Meet the Agents and the Semantic Web

Authors:

Ian Oliver, Esko Nuutila and Seppo Törmä

Abstract: Semantic Web, Space-Based Computing and agent technologies provide for a more sophisticated and personalised context-gathering and information-sharing experience. Environments which benefit from these properties are often found in direct social situations such as meetings. In the business world, such meetings are often artificially structured according to a process which does not necessarily fit the physical (and virtual) environment in which a meeting takes place. By providing simple ontologies to structure information, together with agents and a space-based information-sharing environment with reasoning capabilities, we can enable a much richer social interaction environment.

Paper Nr: 5
Title:

Context-driven Business Process Modelling

Authors:

Matthias Born, Jens Kirchner and Jörg P. Müller

Abstract: Business professionals create business process models in order to understand who is involved and which resources are needed for the execution of a business process. Such business process models are required as a basis for knowledge transfer, quality purposes, regulations, communication between internal and external collaborative partners, or documentation in general. Current process modelling approaches and tools are still very costly, error-prone, and often used in an insouciant manner. For example, changes affecting similar process artefacts need to be executed manually in each embedding model. We aim to increase the reusability of business process knowledge and propose the integration of a context-driven principle for business process modelling.

IWRT 2009

Full Papers
Paper Nr: 5
Title:

An Innovative RFID Sealing Device to Enhance the Security of the Supply Chain

Authors:

Francesco Rizzo, Marcello Barboni, Paolo Timossi, Graziano Azzalin and Marco Sironi

Abstract: One of the most vulnerable elements of the supply chain, from a security point of view, is the commercial container. Due to its nature as a transport vector for goods, it can easily be exploited as a carrier for illegal and dangerous items. In this paper, while acknowledging the need for a global and comprehensive approach to supply chain security, we focus on the security of commercial containers. We discuss the technologies presently used in the field of commercial container security. In particular, we give an overview of the research carried out by the Joint Research Centre (JRC) of the European Commission in the field of supply chain security. We then discuss in depth the active RFID-based sealing systems designed at the JRC, giving a detailed technological description and presenting the experimental results of the laboratory tests.

Paper Nr: 6
Title:

Trust Framework for RFID Tracking in Supply Chain Management

Authors:

Manmeet Mahinderjit Singh and Xue Li

Abstract: RFID tracking systems operate in an open system environment, where different organizations have different business workflows and operate on different standards and protocols. For RFID tracking to be effective, it is imperative that RFID tracking systems trust each other and be collaborative. However, RFID tracking systems operating in the open system environment are constantly evolving and hence the related trust and collaborations need to adapt dynamically to changes. This paper presents a seven-layer RFID trust framework that merges social and technological traits to enhance the security, privacy and integrity of global RFID tracking systems. An example of integrating our trust framework with supply-chain management applications, together with a trust evaluation, is also presented.

Paper Nr: 7
Title:

Agent/Space-based Computing and RF Memory Tag Interaction

Authors:

Joni Jantunen, Ian Oliver, Sergey Boldyrev and Jukka Honkola

Abstract: As devices become more ubiquitous and information more pervasive, technologies such as the Semantic Web and Space-Based Computing come to the fore as a platform for user interaction with the environment. Integrating technologies such as advanced RF memory tags with mobile devices, and combining this with computation platforms such as those supporting Semantic Web principles, provides users with innovative and novel applications.

Paper Nr: 9
Title:

RFId and Returnable Transport Items Management: An Activity-based Model to Assess the Costs and Benefits in the Fruit and Vegetable Supply Chain

Authors:

Giovanni Miragliotta, Alessandro Perego and Angela Tumino

Abstract: Radio Frequency Identification (RFId) technology promises to enable substantial benefits in Returnable Transport Items management. Lured by this opportunity, some companies (e.g. Metro Group) are carrying out pilot projects, but the difficulties in quantifying the costs and benefits stemming from such applications still prevent many companies from using this technology. This paper describes an analytical model to assess the profitability of such investments, focusing on the fruit and vegetable supply chain.

Paper Nr: 11
Title:

A Traceability Service to Facilitate RFID Adoption in the Retail Supply Chain

Authors:

Gabriel Hermosillo, Julien Ellart, Lionel Seinturier and Laurence Duchien

Abstract: Nowadays, companies are undergoing changes in the way they deal with their inventories and their whole supply chain management. New technologies are emerging to help them adapt to the changes and keep a competitive status, but the adoption of such technologies is not always easy. Even though a lot of research has been done on RFID, there are still some areas that are being left aside, like the traceability aspect, which is one of the most important concerns in the retail supply chain. We propose a service named TRASER (TRAceability SErvice for the Retail supply chain) that will help companies adopt the new technologies into their existing environments, dealing with persistence and traceability, and allowing users to manage their operations according to their business rules, workflows and historical data.

Paper Nr: 12
Title:

Discovery Services Interconnection

Authors:

Jérôme Le Moulec, Jacques Madelaine and Ivan Bedini

Abstract: This paper presents and discusses various solutions for the cooperation of several components implementing Discovery Services (DS) in the EPCglobal architecture. This architecture aims to collect and store events involving objects tagged with an Electronic Product Code (EPC) that can be accessed using RFID technology. Each event is stored in a repository with a standardized interface specification: the EPC Information Services. The DS components are used by the application layer to retrieve which repositories store events about a given code. While this is not a problem for a centralized network, it is still an open question for a decentralized architecture. Security and access control concerns are also taken into account.

Paper Nr: 16
Title:

Optimum Frame-length Configuration in Passive RFID Systems Installations

Authors:

M. V. Bueno-Delgado, J. Vales-Alonso, E. Egea-López and J. García-Haro

Abstract: Anti-collision mechanisms in RFID, including the current standards, are variations of Aloha and Frame Slotted Aloha (FSA). The identification process starts when the reader announces the length of a frame (in number of slots). Tags receive this information and randomly choose a slot in that cycle in which to transmit their identifier number. The best performance of FSA always requires working with the optimum frame length in each cycle. However, this is not an easy parameter to adjust in real RFID readers. In this work, a Markovian analysis is proposed to find the optimum frame-length value for current readers on the market. In addition, to validate and contrast the analytical results, a real passive RFID system, the Alien 8800 development kit, has been used to obtain experimental results. The experimental results match the analysis predictions.
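The optimum frame length in FSA can be motivated without the paper's Markovian machinery: if each of n tags picks one of L slots uniformly, a slot yields a successful read only when exactly one tag chose it, and the expected throughput per slot is maximized at L = n. The sketch below is this classical expected-throughput argument, not the authors' analysis; the function names are illustrative.

```python
def expected_successes_per_slot(n_tags, frame_len):
    # Each tag picks one of frame_len slots uniformly at random.
    # A slot is a successful read iff exactly one tag chose it:
    # P(success) = n * (1/L) * (1 - 1/L)^(n-1), i.e. throughput per slot.
    p = 1.0 / frame_len
    return n_tags * p * (1.0 - p) ** (n_tags - 1)

def best_frame_length(n_tags, max_len=512):
    # Exhaustively find the frame length maximizing per-slot throughput;
    # analytically the maximum sits at frame_len == n_tags.
    return max(range(1, max_len + 1),
               key=lambda L: expected_successes_per_slot(n_tags, L))
```

Setting the derivative of n/L * (1 - 1/L)^(n-1) to zero gives L = n exactly, which is why estimating the tag population is the crux of frame-length adjustment in real readers.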

Paper Nr: 17
Title:

RFID-based Semantic-enhanced Ubiquitous Decision Support System for Healthcare

Authors:

Michele Ruta, Floriano Scioscia, Tommaso Di Noia and Eugenio Di Sciascio

Abstract: We present an innovative Decision Support System for healthcare applications, based on a semantic enhancement of the standard RFID protocol. Semantically annotated descriptions of both drugs and the patient’s case history, stored in proper RFID tags, are used to help doctors provide the correct therapy. The proposed system makes it possible to discover inconsistencies in a therapy and to suggest alternative treatments.

Short Papers
Paper Nr: 1
Title:

Real-Time Traceability and Intelligent Product Management in the Supply Chain

Authors:

Taxiarchis Belis, Stelios Tsafarakis and Anastasios Doulamis

Abstract: In this paper we present an architecture that provides enterprises with the opportunity to apply a low-cost traceability system which allows for the complete transparency of products in the supply chain. The proposed traceability system uses the RSS 2.0 format, while a product classification ontology is developed in order to enable intelligent product management. The above technologies are incorporated into an RFID architecture. The deployment of RFID technology in the supply chain is expected to significantly increase the amount of product data, raising the need to adopt an effective ontology-based model for intelligent product information management.

Paper Nr: 10
Title:

Automatic Monitoring of Logistics Processes using Distributed RFID based Event Data

Authors:

Kerstin Werner and Alexander Schill

Abstract: Decreasing sizes and a steady decline in production costs are fostering the use of RFID tags and sensors in cross-company logistics networks. Additionally, the EPCIS specification comprises interface standards for capturing and querying RFID-based event data and storing it in a standardized data format. This contribution examines the potential of the given technological means for the automatic monitoring of complex inter-organizational logistics processes. We identify requirements for the monitoring of individual quality objectives using distributed event data and describe the architecture of a monitoring system addressing them. Furthermore, we argue that such a system can be almost seamlessly integrated into existing EPCglobal-compliant RFID infrastructures.

Paper Nr: 13
Title:

A Ubiquitous Mobile Telemedicine System for Elderly using RFID

Authors:

M. W. Raad

Abstract: The impact of Radio Frequency Identification (RFID) technology as a ubiquitous component in the information age is expanding with new and innovative applications in retail, government, manufacturing and healthcare. In addition, the percentage of elderly people in the population has grown rapidly in recent years, so meeting their considerable life and health care needs has become an issue of vital importance in an aging society. In a health care context, RFID technology can be employed to cut down health care costs. The main goal of the research presented in this paper is to develop a cost-effective, user-friendly telemedicine system utilizing RFID technology to serve elderly and disabled people in the community. The research also aims at establishing a continuous communication link between the elderly and caregivers, allowing physicians to offer help when needed.

Paper Nr: 14
Title:

RFId-enabled Lateral Trans-shipments in the Fashion & Apparel Supply Chain

Authors:

Riccardo Mogre, Alessandro Perego and Angela Tumino

Abstract: There is growing attention towards the application of Radio Frequency Identification (RFId) technology in the Fashion & Apparel supply chain. This paper describes a Lateral Trans-shipment replenishment policy enabled by item-level RFId tagging. In order to adopt a Lateral Trans-shipment policy, a firm needs accurate visibility of inventory levels both in its warehouses and in each store. Such visibility can be provided by RFId technology. An analytical model has been developed to evaluate the convenience of applying a Lateral Trans-shipment policy, and the main managerial implications are discussed.

Paper Nr: 15
Title:

Man Machine Interface in RFID Middleware: DEPCAS User Interface

Authors:

Carlos Cerrada, Ismael Abad, José Antonio Cerrada and Ruben Heradio

Abstract: Among the main components of RFID middleware, the editing and management tools support the configuration and deployment of RFID solutions. The most important function of the Man Machine Interface component of RFID middleware is to allow access to the monitoring, management and control capabilities included in the middleware. Among the key features necessary for this component are flexibility, adaptability, and scalability with respect to the middleware system that implements it. We present a study of the interfaces of some existing RFID middleware. We also include a prototype of the user interface built for the RFID middleware architecture called DEPCAS (Data EPC Acquisition System). This interface, called GUV (Graphical User Viewer), was developed using the EFL (Exemplar Flexibilization Language) language.

HRIS 2009

Full Papers
Paper Nr: 4
Title:

Appropriation of Technologies: What Role for the Organization?

Authors:

Ewan Oiry and Roxana Ologeanu-Taddeï

Abstract: The concept of appropriation is frequently used in publications concerning the uses of technologies. First elaborated to analyze difficulties in the diffusion of innovation, it was usually linked with the characteristics of the organization where those innovations were implemented. The concept has seen great improvements in recent years. On one side, it is now common practice to link appropriation with the characteristics of the technology used. This “technology side” of appropriation is especially well described by Adaptive Structuration Theory [6]. On the other side, it is also common practice to link appropriation with the characteristics of users (interactions in groups, etc.). This “user side” of appropriation can be treated with the “Theory of practice” [20]. However, these frameworks do not appear able to really take into account the “organization side” of appropriation. By presenting three case studies, this paper shows that it is necessary to reintroduce this “side” in order to have a complete analysis of appropriation.

Paper Nr: 5
Title:

Implementing English Language e-HRM Systems: Effects on User Acceptance and System Use in Foreign Subsidiaries

Authors:

Jukka-Pekka Heikkilä and Adam Smale

Abstract: It is suggested that the effective implementation of electronic human resource management (e-HRM) technology will transform the management of human resources in firms by allowing HR departments to transfer data management and process transaction responsibilities to employees, managers and third-party service providers. In a global setting, e-HRM systems are tasked with facilitating this whilst accommodating cultural differences, one of which is language. In the light of scant empirical research on language in international HRM, this study briefly analyzes and presents the effects of using English-language e-HRM systems on acceptance and use in foreign subsidiaries. The study’s findings are based on qualitative data collected via 18 in-depth interviews with HR managers from two European MNCs.

Paper Nr: 6
Title:

An Integrated IT-Architecture for Talent Management and Recruitment

Authors:

Christian Maier, Sven Laumer and Andreas Eckhardt

Abstract: Already in 2001, Donahue [7] argued in her Harvard Business Review article that it is time to get serious about talent management, and in 2005 Hustad and Munkvold [11] presented a case study of IT-supported competence management. Based on Lee’s suggested architecture for a holistic e-recruiting system [16] and the general research on e-HRM, the aim of this paper is to suggest an integrated IT architecture for talent management and recruitment, following the design science guidelines proposed by Alan Hevner [10], to support the recruiting as well as the talent and competence management activities of a company.

Paper Nr: 7
Title:

e-Recruitment: New Practices, New Issues. An Exploratory Study

Authors:

Aurélie Girard and Bernard Fallery

Abstract: The Internet already impacts the recruitment process, and the development of Web 2.0 offers recruiters new perspectives. Are Web 2.0 practices revealing new e-recruitment strategies? We first connect the RBV and the SNT with Web 1.0 and Web 2.0 respectively. Then, we present the results of an exploratory study conducted among recruiters in software and computing services companies. It appears that the use of Web 1.0 is generalized but is becoming insufficient. Web 2.0 is used by firms to develop employer branding and reputation and to create new relationships with potential applicants.

Paper Nr: 8
Title:

Virtual HRM: A Case of e-Recruitment

Authors:

Anna B. Holm

Abstract: Although electronic recruitment is a widespread managerial practice for acquiring personnel, it still remains unclear exactly which organisational processes fall under its existing definitions. The research presented in this paper attempts to answer the fundamental question of whether e-recruitment should be understood as a means of automating the recruitment process, or rather be treated as a more complex organisational concept. To clarify this issue, the paper discusses the phenomenon from the open-system organisational perspective of virtual organising. The paper draws on the results of a qualitative exploratory study conducted in Denmark in 2008-2009. It concludes that, as an organisational concept, e-recruitment is not only about the application of technology to recruitment tasks. The process of e-recruiting spans organisational boundaries and is directed towards, and affected by, the external environment of organisations.

Paper Nr: 9
Title:

Generation Y & Team Creativity: The Strategic Role of e-HRM Architecture

Authors:

Barbara Imperatori and Rita Bissola

Abstract: Nowadays HR departments intend to be a ‘business partner’; this means sustaining the critical sources of competitive advantage, such as knowledge creation, creativity processes and innovation. In order to attract, retain and develop the ‘new creative and always connected’ talents of Generation Y, designing a new e-HRM architecture is a strategic issue. The present article, starting from a wide empirical experiment with a sample of 1078 students, provides valuable results about the relationship between team and individual creativity and suggests some useful indications for e-HRM, especially for the new and not yet well-known Gen-Yers. Multiple measures of both individual and team creativity were considered. The data confirm that individual creativity is positively related to group creativity but does not fully explain it; interpersonal dynamics intervene. This evidence is the basis for defining guidelines useful in the design of a strategic e-HR architecture, in supporting Gen-Y staffing, training and development as well as team design and interpersonal dynamics, in order to really enhance organizational creativity and the sustainability of competitive advantage.

Paper Nr: 10
Title:

Identifying the “Best” Human Resource Management Practices in India: A Case Study Approach

Authors:

Pramila Rao

Abstract: The primary purpose of this research paper is to detail the human resource management (HRM) practices of three Indian companies identified by Mercer, the global consulting firm, as the best companies to work for in India in 2008. The HRM practices researched were recruitment & selection, training & development, performance appraisal, and compensation & benefits. India is becoming a major player in the Asian economy, and US multinationals are significantly increasing their presence in India. This qualitative paper uses a design strategy of purposeful sampling, which provides rich information about the distinctive companies sampled. The primary method of data collection was tape-recorded interviews, followed by transcription of the data. The data analysis strategy involves studying each unique case, with cross-case analysis following the individual case studies. These “best” companies demonstrated a superior work culture, an elaborate recruitment system, rigorous training and development, a performance-based work culture and unique benefits. These Indian companies definitely walked the extra mile to make employees feel motivated, engaged, and committed. This study provides successful HRM practices that multinationals can adopt for their business units in India. Further, this paper highlights the HRM practices that are integrated with information technology, providing an idea of which human resource information systems (HRIS) are successful as well. This research proposes an HRM model identifying the best practices of this research that can be emulated by global HRM practitioners. The paper advances theory by integrating resource-based theory with the HRM practices of the best companies to improve the understanding of strategic human resource management.

Paper Nr: 11
Title:

Electronic HRM: From Implementation to Value Creation

Authors:

Yvonne Loijen and Tanya Bondarouk

Abstract: The paper presents the results of a quantitative study into the enablers and value creation of e-HRM systems. The findings, supported by the analysis of 210 questionnaires, reveal that the most significant enabler of e-HRM implementation is HRM system strength, while characteristics of the IT functionality also played an important role. The main result of e-HRM usage was observed to be the effectiveness of HR administrative processes, rather than the restructuring of the HR function usually expected from the introduction of e-HRM in organizations.

Paper Nr: 12
Title:

New e-HRM Typology: From Broadcasting towards Supply Chain Support

Authors:

Tanya Bondarouk and Marco Maatman

Abstract: We argue that an existing classification of e-HRM, known as the division into transactional, relational, and transformational (based on the canonical work of Lepak and Snell [11]), does not meet all the expectations of the multidisciplinary character of e-HRM and Human Resource Information Systems. Built mainly on ideas from the HRM field, it lacks attention to such properties as the coordination of information and its exchange, the knowledge domain captured, and communication languages. We propose to broaden the existing typology by including insights from the field of information technologies. In the suggested typology, e-HRM / HRIS is classified along ontological, coordination, user-interface, adaptation, and HR function impact blocks, allowing five types of e-HRM to be distinguished: static and customized informational, pooled and sequential transactional, and supply chain delivery support. We see several advantages in using this typology for practitioners, the most important being that it helps to evaluate the stage of e-HRM / HRIS development and to foresee horizons for improvement.

Paper Nr: 14
Title:

Convergence or Divergence in Global e-HRM? General Foundations and Basic Propositions

Authors:

Stefan Strohmeier

Abstract: Though electronic Human Resource Management (e-HRM) is increasingly adopted in different countries, our current knowledge concerning its global adoption is limited. In particular, it is unclear whether e-HRM is a universal management practice or whether there are regional differences in its organizational adoption. The present paper therefore aims at an initial examination of this question by a) elaborating the general foundations of global e-HRM and b) developing some basic propositions on global e-HRM. Discussing the foundations uncovers an ambitious and voluminous research task. Based on an analysis of basic institutional and cultural influences, the major results are the digital divide, contextual openness, and functional congruence of e-HRM.

Paper Nr: 16
Title:

Usability Study on Dutch e-Recruiting Services: Limitations and Possibilities from the Applicants' Perspective

Authors:

Chris Jansen, Elfi Ettinger and Celeste Wilderom

Abstract: In this research, applicants' perceived expectations, limitations and service improvements concerning two Dutch e-Recruiting services (monsterboard.nl and vacant.nl) are investigated. Data from interviews and videotapes have been analysed. The main perceived limitation in regard to e-Recruiting sites is the lack of personal communication and contact. The majority of usability problems stem from layout issues (monsterboard.nl), search functionality (vacant.nl) and lack of information (both). Better search and matching functions and the inclusion of personal elements into e-Recruiting service offerings were the foremost desires of users.

Paper Nr: 17
Title:

Knowledge Management in a Multinational Context: Aligning Nature of Knowledge and Technology

Authors:

Cataldo Dino Ruta and Ubaldo Macchitella

Abstract: The aim of this paper is to show the importance of understanding the nature of both the technology and the knowledge involved when promoting knowledge sharing through knowledge management (KM) portals. The paper investigates knowledge sharing and the “fit” between the nature of the knowledge to be shared and the nature of the technological tools that are used. Technology intended merely as a technical instrument can result in an empty box, and knowledge management initiatives may then fail to be effective and to lead to a sustainable competitive advantage. By means of an in-depth case study of a major consulting firm, the study discusses and answers the research question. Results show that knowledge areas with a high level of codifiability can be effectively shared using low-collaborativity and low-multimodality tools. Knowledge areas with a high level of epistemic complexity can be effectively shared using high-collaborativity and high-multimodality tools. Knowledge areas with a high level of task dependence can be effectively shared using low-collaborativity and intermediate-multimodality tools.

Paper Nr: 18
Title:

How Can HRM Support Business Intelligence Systems?

Authors:

Manelle Guechtouli

Abstract: This paper deals with HR management issues in a Business Intelligence (BI) system, and more precisely with its motivational aspects, i.e. how to increase employees’ motivation in such systems. The question asked in this paper is: which HR functions could play a role in enhancing BI activities? After a literature review, we chose to focus on Hannon’s model [10] in order to answer this question. Results are discussed and illustrated through a field study in a large technology firm.

AT4WS 2009

Full Papers
Paper Nr: 6
Title:

Service Composition Based on Functional and Non-Functional Descriptions in SCA

Authors:

Djamel Belaïd, Hamid Mukhtar and Alain Ozanne

Abstract: Service Oriented Computing (SOC) has gained maturity, and various specifications and frameworks exist for the realization of SOC. One such specification is the Service Component Architecture (SCA), which defines applications as assemblies of heterogeneous components. However, such an assembly is defined once and remains static, with fixed components, throughout the application life-cycle. To address this problem, we propose an approach for the dynamic selection of components in SCA, based on functional semantic matching and non-functional strategic matching using policy descriptions in SCA. The architecture of our initial system is also discussed.

Short Papers
Paper Nr: 2
Title:

Planning Process Instances with Web Services

Authors:

Charles Petrie

Abstract: Planning is an important approach to developing complex applications composed of web services, based upon semantic annotations of these services. Despite numerous publications in recent years, the problems considered in the literature typically do not require planning as it has been well-defined in computer science. This could lead to confusion about which technologies are being designated, and raises the question of whether planning is an appropriate technology for services. We describe the essential features of planning technology and note its advantages, which include the dynamic synthesis of processes and the lack of need to verify the correctness of the message exchange. We show that planning technology really is necessary by giving an example of web service composition that cannot be solved with the simpler technologies that sufficed for previously published examples. We describe the basics of adapting planning to web service composition, restricting its use to process instance synthesis in order to simplify the exploration of some fundamental issues. A major issue is that web services are usually incompletely modeled; we illustrate this with a second example. We show that some additional semantic annotations of web services, used in conjunction with re-planning, can solve problems similar to the example.

Paper Nr: 5
Title:

Generating OWL-S Families by Utilizing Business Process Definitions and Feature Models

Authors:

Umut Orhan and Ali H. Dogru

Abstract: This research introduces an automated transition from domain models and process specifications to semantic web service descriptions in the form of service ontologies. Automated verification and correction of domain models are also enabled. The proposed approach is based on Feature-Oriented Domain Analysis (FODA), Semantic Web technologies, and the ebXML Business Process Specification Schema (ebBP). It is proposed to address the need for productivity gains, maintainability and better alignment of business requirements with technical capabilities in the engineering of service-oriented applications and systems.

AER 2009

Full Papers
Paper Nr: 1
Title:

Reporting Repository: Using Standard Office Software to Manage Semantic Multidimensional Data Models

Authors:

Christian Kurze and Peter Gluchowski

Abstract: Implementing a business intelligence solution requires the appropriate integration of numerous tasks and components. The so-called Data Track comprises three main steps: Dimensional Modeling, Physical Design, and ETL Design & Development. This paper focuses on the Dimensional Modeling step and provides a solution for managing multidimensional data models with standard office tools, namely Microsoft Visio and Access. A real-world project in the telecommunications industry provides the business requirements and is used to validate the solution. We outline the lessons learned and give hints for further development.

Paper Nr: 3
Title:

Towards an Enterprise Repository Framework

Authors:

Dina Jacobs, Paula Kotzé and Alta van der Merwe

Abstract: The enterprise architect depends on the functionality of the enterprise repository to define and maintain the enterprise architecture. Two of these functionalities are typical ‘warehouse’-related functionalities. The first requirement is to integrate multiple business process reference models as source models, similar to the reuse of data from different sources in a data warehouse environment. The second requirement is the flexible visualization of business process models, which has a ‘slice-and-dice’ flavour as used in the data warehouse domain. By means of analogical reasoning, our research investigates using the theoretical foundation of the data warehouse domain to contribute to the definition of an enterprise repository framework. Based on the similarities found, an enterprise repository framework is derived.

Paper Nr: 5
Title:

An Architecture of Ontology-aware Metamodelling Platforms for Advanced Enterprise Repositories

Authors:

Srdjan Zivkovic, Harald Kühn and Marion Murzek

Abstract: Enterprise repositories have become core information assets of today’s enterprises. They store and manage models of different aspects of an enterprise, such as business strategy, business processes, organizational structures, and IT infrastructure, in an integrative way based on a bundle of domain-specific modelling languages. To produce such models for an enterprise repository and profit from their usage, a number of mechanisms are needed, such as model querying, validation, simulation, model transformation, versioning and traceability. Metamodelling platforms provide a flexible and extensible environment for realizing such advanced enterprise repositories, in which the various mechanisms working on the repository may be integrated, thus providing a superior modelling solution. Going one step further, the question is: can such platforms profit from semantic technologies? How can the architecture of ontology-aware metamodelling platforms be designed? In this paper, we give an in-depth view of such a platform architecture by discussing its main building blocks and their dependencies.

Paper Nr: 7
Title:

Towards the Use of Formal Ontologies in Enterprise Architecture Framework Repositories

Authors:

Aurona Gerber and Alta van der Merwe

Abstract: An enterprise architecture (EA) framework is a conceptual tool that assists organizations and businesses with the understanding of their own structure and the way they work. Normally an enterprise architecture framework takes the form of a comprehensive set of cohesive models, or enterprise architectures, that describe the structure and the functions of an enterprise. Generically, an architecture model is the description of a set of components and the relationships between them. The central idea of all architectures is to represent, or model in the abstract, an orderly arrangement of the components that make up the system in question and the relationships between these components. It is clear within this context that the models within an enterprise architecture framework are complex. However, recent advances in ontologies and ontology technologies may provide the means to assist architects with the management of this complexity. In this position paper we argue for the integration of formal ontologies and ontology technologies as tools into enterprise architecture frameworks. Ontologies allow for the construction of complex conceptual models but, more significantly, they can assist an architect by depicting all the consequences of her model, allowing for more precise and complete artifacts within enterprise architecture framework repositories; and because these models use standardized languages, they will promote integration and interoperability with and within these repositories.

Posters
Paper Nr: 4
Title:

Separating Conceptual and Visual Aspects in Meta-Modelling

Authors:

Simon Nikles and Simon Brander

Abstract: ATHENE is a modelling environment that allows for creating meta-models and models based on them, aimed at generating an ontology of the modelled content. This work deals with the question of whether and how to separate the conceptual part of a (meta-)model from its graphical representation. We provide an overview of existing ontology- and meta-modelling approaches compared to ATHENE and develop a conceptual basis for enhancing the meta2 level of the tool.

Paper Nr: 6
Title:

Towards a Linguistic Analysis and Representation of Business Rules

Authors:

Pieter Joubert

Abstract: The paper explores linguistics as a basis for analyzing and representing business rules, where the actual business rules are seen as the text to be analyzed. A very simple, straightforward methodology is explained and illustrated with part of a larger case study. In essence, every business rule is analyzed to determine nouns, verbs, adjectives, adverbs, prepositions and other lexical categories. The business rules are then structured into sentences which basically have the structure of conjunctions, subject, predicate, and direct object/adjunct.

MDMD 2009

Full Papers
Paper Nr: 3
Title:

Providing Context-Aware Information in a Mobile Geographical Information System

Authors:

Anderson Resende Lamas, Jugurta Lisboa Filho, Ronoel Matos de Almeida Botelho Júnior and Alcione de Paiva Oliveira

Abstract: This paper presents the development of a Mobile Geographical Information System (Mobile GIS) capable of managing context information. This system was established from an architecture based on the specification of an ontology-based context model and a set of Web Services to access information remotely stored in a geographic database. This mechanism allows Mobile GIS users to receive personalized information in their mobile devices, combining the information on their profiles with the display of geo-spatial data.

Paper Nr: 5
Title:

A Mechanism for Managing and Distributing Information and Queries in a Smart Space Environment

Authors:

Sergey Boldyrev, Ian Oliver and Jukka Honkola

Abstract: Distributed information management is complex, especially when it involves mobile devices with their particular connectivity and computing needs. Based on notions of the stability of the information repository and factors such as computation resources and connectivity, we can optimise the distribution and querying of this information across multiple mobile and other distributed devices.

Paper Nr: 6
Title:

A Framework for Adaptive RDF Graph Replication for Mobile Semantic Web Applications

Authors:

Bernhard Schandl and Stefan Zander

Abstract: An increasing number of applications are based on Semantic Web technologies, and the amount of information available on the Web in the form of RDF is continuously growing. The adoption of the Semantic Web for Personal Information Management and the increasing desire for mobility are often accompanied by situations where no network connectivity is available and hence access to remote data is limited. Such situations could be obviated if mobile devices were able to operate on offline data replicas and synchronize changes when connectivity is re-established. In this paper we present our ongoing work on a framework allowing for adaptive RDF graph replication and synchronization on mobile devices. We propose to interpose components that analyze various information sources of semantic applications (including ontologies, queries, and expressed user interest) and use them to select parts of RDF databases, which are then made available offline through a proxy SPARQL endpoint on a mobile device. Thus, we provide access to Semantic Web data without the need for permanent network connectivity.

FTMDD 2009

Full Papers
Paper Nr: 3
Title:

Towards combining Model Matchers for Transformation Development

Authors:

Konrad Voigt

Abstract: The theory of model transformation has been studied extensively during the last decade and is now well understood. Several transformation languages have been developed and implemented. Recently, model matching has been proposed to support transformation development. Model matching aims at finding semantic correspondences between model elements, thus facilitating semi-automatic mapping generation. However, current model matching approaches mostly concentrate on label-based model similarity and are isolated from one another. Furthermore, they show deficits with respect to quality, performance and language independence. We tackle these issues by proposing a novel approach that combines matchers in a common framework, adapting and extending schema matching techniques to suit our needs. Complementing our approach, we propose model-specific matchers addressing new aspects of similarity. Our configurable framework allows an interpretation of the combined matching results, thus increasing the number and quality of the mappings found.

Paper Nr: 7
Title:

An Attribute-based Approach to the Analysis of Model Characteristics

Authors:

Christian Saad, Florian Lautenbacher and Bernhard Bauer

Abstract: Modeling languages provide a powerful technique for describing domain-specific concepts and their relationships. Since the syntactical structure is determined by a meta-model, expressions can be defined on the meta layer and evaluated for arbitrary instances. Currently, meta-modeling standards lack an easy way to define rules capable of dynamically examining the behavior of a model. In this paper we discuss a new approach that links meta-modeling with well-understood methods from the field of compiler construction in order to express semantic constraints and perform data-flow calculations on model instances. We show how our approach can be used to analyze and validate business processes and to cover further use cases, such as the calculation of model metrics.

Paper Nr: 8
Title:

APRiL: A DSL for Payroll Reporting

Authors:

Xiaorui Zhang, Yun Lin and Øystein Haugen

Abstract: The highly diverse payroll reporting structures within and between organizations pose challenges to enterprise information system vendors. Producing the database scripts for the customized configuration of payroll reporting has traditionally been a costly manual process. We show how this process can be automated and made less error-prone and more user-friendly by combining Model-Driven Development (MDD) with a Domain Specific Language (DSL). This paper addresses the development of the Agresso Payroll Reporting Language (APRiL), a DSL for describing payroll structures and hierarchies. The language is supported by tailored tools created with open-source technologies on Eclipse. We look at the potential implications of our approach for the development of payroll reporting systems, along with its advantages and challenges. We also explore possible improvements and the application of our approach in other areas of enterprise information systems.

Short Papers
Paper Nr: 2
Title:

Model-Driven Tool Integration with ModelBus

Authors:

Christian Hein, Tom Ritter and Michael Wagner

Abstract: ModelBus is a tool integration technology which is built upon Web Services and follows a SOA approach. ModelBus facilitates the orchestration of modeling services which represent particular functionality of tools acting on models. This demonstration paper summarizes the key concepts of the ModelBus technology and presents an outline of a scenario where ModelBus has been applied in an industrial context.

Paper Nr: 4
Title:

An Approach of Ontology Oriented SPEM Models Validation

Authors:

Miroslav Líška

Abstract: SPEM 2.0 is used to define software and systems development processes and their components. It separates a reusable development knowledge base from its application in processes. In this work we present a knowledge-oriented approach that supports the design of SPEM models with respect to the SPEM architecture. The main idea is based on the creation of three ontologies: a SPEM ontology, a development knowledge ontology, and an ontology of development knowledge application. The ontologies are generated via XSL transformations from UML models and from the SPEM CMOF specification, and they are imported into Protégé for reasoning purposes. The paper concludes by discussing future work to increase the contributions of this approach.

Paper Nr: 6
Title:

Towards a Generic Traceability Framework for Model-driven Software Engineering

Authors:

Birgit Grammel

Abstract: With the inception of Model-Driven Software Engineering (MDSD), the need for traceability has been raised in order to understand the complexity of model transformations and, overall, to improve the quality of MDSD. Taking advantage of the ability to generate traceability information automatically in MDSD eases the problem of creating and maintaining trace links, which is a labor-intensive task when done manually. Yet there is still a wide range of open challenges in existing traceability solutions and a need to consolidate traceability domain knowledge. This paper proposes a generic framework for augmenting arbitrary model transformation approaches with a traceability mechanism. Essentially, this augmentation is based on a domain-specific language for traceability that formalizes the integration conditions needed for implementing traceability. The paper is positional in nature and outlines work currently in progress.

Paper Nr: 10
Title:

Trade-offs for Model Inconsistency Resolution

Authors:

Anne Keller, Hans Schippers and Serge Demeyer

Abstract: With the advent of model-driven software development, models gain importance in all steps of the software development process. This makes consistency between models an important challenge. Models in large projects typically contain many inconsistencies, each of which can be resolved in multiple ways with different effects on the system, making the choice of a resolution strategy a hard one. In this paper, we present metrics that assess the impact of a resolution operation on the model and its computational cost. We propose to use these metrics to compare different resolution strategies.

Paper Nr: 11
Title:

Reflecting on Higher Order Transformations: Challenges and Opportunities

Authors:

Olaf Muliawan and Dirk Janssens

Abstract: The area of Model-Driven Engineering focuses on the transformation of models into source code. However, large projects require complex transformation patterns which are difficult to implement and maintain. New language features could represent often-used transformation patterns; however, extending a transformation language is not the preferred solution if the language is to remain concise. We therefore introduce the notion of Higher Order Transformations, which manipulate these new language features and transform them back into the original language. In this paper we explain the challenges of using Higher Order Transformations and the opportunities these techniques provide.

OET 2009

Full Papers
Paper Nr: 6
Title:

Exploiting Tagging in Ontology-based e-Learning

Authors:

Nicola Capuano, Angelo Gaeta, Francesco Orciuoli and Stefano Paolozzi

Abstract: Ontologies are used to state the meaning of the terms used in data produced, shared and consumed within the context of Semantic Web applications. Folksonomies, instead, are an emergent phenomenon of the Social Web and represent the result of free tagging of information and objects in a social environment. Both ontologies and folksonomies are considered useful mechanisms for managing information, and they are almost always exploited independently, in several areas of interest, to cope with problems related to searching, filtering, categorization and organization of content within applications for e-commerce, e-learning, e-science, etc. In our opinion the two mechanisms are not in opposition but could be used synergistically. In this paper we propose an approach based on the convergence between ontologies and folksonomies in order to improve personalised e-learning processes.
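
One simple way such a convergence could work — this is an illustrative sketch with hypothetical data, not the approach proposed in the paper — is to anchor each free folksonomy tag to its most similar ontology concept, keeping only confident matches:

```python
# Illustrative sketch: map free tags onto ontology concepts by string
# similarity, so tagging activity can enrich an ontology-based e-learning
# system. Concept and tag names are invented for the example.
from difflib import SequenceMatcher

ontology_concepts = ["Linear Algebra", "Probability Theory", "Graph Theory"]
user_tags = ["linear-algebra", "probabilty", "cooking"]

def anchor_tags(tags, concepts, threshold=0.6):
    """Map each tag to the closest concept label, or None if too dissimilar."""
    mapping = {}
    for tag in tags:
        sim = lambda c: SequenceMatcher(None, tag.lower(), c.lower()).ratio()
        best = max(concepts, key=sim)
        mapping[tag] = best if sim(best) >= threshold else None
    return mapping

print(anchor_tags(user_tags, ontology_concepts))
```

Tags that anchor to no concept (here, "cooking") are candidates for extending the ontology, which is one sense in which the two mechanisms can reinforce each other.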

Paper Nr: 7
Title:

Probabilistic Models for Semantic Representation

Authors:

Francesco Colace, Massimo De Santo and Paolo Napoletano

Abstract: In this work we present the main ideas behind the In Search of Semantics project, which aims to provide tools and methods for revealing the semantics of human linguistic action. Different parts of semantics can be conveyed by a document or any other kind of linguistic action: the first is mostly related to the structure of word and concept relations (light semantics), and the second to the relations between concepts, perceptions and actions (deep semantics). We argue, as a consequence, that semantic representation can emerge through the interaction of both. This research project investigates how these different parts of semantics, and their mutual interaction, can be modeled through probabilistic models of language and probabilistic models of human behavior. Finally, a real environment, a web search engine, is presented and discussed to show how one part of this project, light semantics, has been addressed.
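
To make the "light semantics" idea concrete — the abstract does not specify the model, so the following is our own toy illustration, not the project's method — word-level structure can be captured probabilistically from co-occurrence counts in a corpus:

```python
# Hedged sketch: approximate "light semantics" with a simple probabilistic
# association between words, estimated from document co-occurrence counts
# in a toy three-document corpus.
from collections import Counter
from itertools import permutations

corpus = [
    ["semantic", "web", "search"],
    ["web", "search", "engine"],
    ["semantic", "search", "engine"],
]

word_count = Counter(w for doc in corpus for w in set(doc))
co_count = Counter()
for doc in corpus:
    for a, b in permutations(set(doc), 2):
        co_count[(a, b)] += 1

def p_cooccur(a, b):
    """P(b appears in a document | a appears in it)."""
    return co_count[(a, b)] / word_count[a]

print(p_cooccur("semantic", "search"))  # 1.0: they always co-occur here
```

Asymmetries in these conditional probabilities (p_cooccur(a, b) differing from p_cooccur(b, a)) are what make such estimates usable as a structure over word relations, e.g. for query expansion in a search engine.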

Paper Nr: 8
Title:

From Classic User Modeling to Scrutable User Modeling

Authors:

Giorgio Gianforme, Sergio Miranda, Francesco Orciuoli and Stefano Paolozzi

Abstract: User Modeling still represents a key component of a large number of personalization systems. By maintaining a model for each user, a system can successfully personalize its contents and use the available resources accordingly. Ontologies, as a shared conceptualization of a particular domain, can be suitably exploited in this area as well. In this paper we explain some concepts of user modeling, focusing particularly on scrutability and its importance in ontology-based user modeling systems.

Paper Nr: 9
Title:

Ontology Engineering: Co-evolution of Complex Networks with Ontologies

Authors:

Francesca Arcelli Fontana, Ferrante Raffaele Formato and Remo Pareschi

Abstract: Our assumption here is that the relationships between networks of concepts (ontologies) and networks of people (web communities) are reciprocal and dynamic: ontologies identify communities, and communities, through practice, define ontologies. Ontologies describe complex domains and are therefore difficult to create manually. Our investigation aims at building tools and methodologies to drive the process of ontology building. In particular, we define a model by which ontologies evolve through Web community extraction. In this paper we observe that tags in Web 2.0 are mathematical objects called clouds, studied in [11], and we introduce NetMerge, an algorithm for transforming an ontology into a complex network.
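
The abstract gives no detail of NetMerge itself, but the general move it names — turning an ontology into a network so it can be combined with community-derived networks — can be sketched. All data and function names below are hypothetical:

```python
# Hypothetical sketch of the idea named in the abstract: project an
# ontology's is-a edges into an undirected network, then merge it with a
# network extracted from a Web community by unioning shared node labels.
from collections import defaultdict

# Toy ontology: child -> parent (is-a) relations.
is_a = {"cat": "mammal", "dog": "mammal", "mammal": "animal"}

def ontology_to_network(is_a):
    """Return an undirected adjacency map over the ontology's concepts."""
    network = defaultdict(set)
    for child, parent in is_a.items():
        network[child].add(parent)
        network[parent].add(child)
    return network

def merge(net_a, net_b):
    """Union two networks on shared node labels."""
    merged = defaultdict(set)
    for net in (net_a, net_b):
        for node, neighbours in net.items():
            merged[node] |= neighbours
    return merged

# Toy community network extracted from tagging activity.
community = {"cat": {"pet"}, "pet": {"cat", "dog"}, "dog": {"pet"}}
combined = merge(ontology_to_network(is_a), community)
print(sorted(combined["cat"]))  # neighbours from both sources
```

Nodes like "cat" now carry edges from both the engineered ontology and the community, which is the reciprocity the paper assumes.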

Paper Nr: 10
Title:

Building Ontologies from Annotated Image Databases

Authors:

Angelo Chianese, Vincenzo Moscato, Antonio Penta and Antonio Picariello

Abstract: Defining and building ontologies within the multimedia domain remains a challenging task, due to the complexity of multimedia data and the associated knowledge. In this paper we investigate the automatic construction of ontologies from the Flickr image database, which contains images, tags, keywords and, sometimes, useful annotations describing both the content of an image and interesting personal information about the depicted scene. We then describe an example of automatic ontology construction in a specific semantic domain.
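
A common family of heuristics for inducing ontology structure from image tags — shown here on toy data as an illustration of the general technique, not as the paper's algorithm — takes tag A as a parent of tag B when A appears in almost every image tagged B, but not vice versa:

```python
# Minimal sketch of subsumption induction from tag co-occurrence: with toy
# "images" as tag sets, infer (parent, child) pairs from the asymmetry of
# conditional co-occurrence probabilities. Threshold is an assumption.
from collections import Counter
from itertools import combinations

images = [
    {"animal", "cat"}, {"animal", "cat"}, {"animal", "dog"},
    {"animal", "dog"}, {"animal"},
]

def induce_subsumptions(images, threshold=0.8):
    """Return (parent, child) pairs using conditional co-occurrence."""
    count = Counter(tag for img in images for tag in img)
    pair = Counter()
    for img in images:
        for a, b in combinations(sorted(img), 2):
            pair[(a, b)] += 1
    edges = []
    for (a, b), n in pair.items():
        p_a_given_b = n / count[b]   # how often a appears alongside b
        p_b_given_a = n / count[a]
        if p_a_given_b >= threshold and p_b_given_a < threshold:
            edges.append((a, b))     # a subsumes b
        elif p_b_given_a >= threshold and p_a_given_b < threshold:
            edges.append((b, a))
    return edges

print(induce_subsumptions(images))
```

Here "animal" co-occurs with every "cat" and "dog" image but not conversely, so it is inferred as their parent — the kind of concept hierarchy a tag-annotated collection like Flickr can yield.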

Paper Nr: 11
Title:

Information Driven Modeling: The Integration of Enterprise Ontology and Enterprise Architecture for the Mature BI

Authors:

Jorge Cordeiro Duarte and Mamede Lima-Marques

Abstract: There is huge interest in engineering ontologies for a very wide range of applications. In modern enterprises, modeling and Business Intelligence (BI) play a significant role in business process management and decision making. The contribution of Information Technology (IT) to business strategies and decisions has been questioned; the problem, however, lies not with IT but with the complexity of modern business. The challenge of enterprise intelligence is a mission for the whole enterprise and depends on intensive collaboration among organizational units. Enterprise BI requires new approaches, methods and tools focused on integration; BI must be part of the Enterprise Architecture (EA). This paper offers a new insight into Business Intelligence: it proposes an extension of the ArchiMate methodology in order to integrate EA and BI efforts with enterprise ontology, so as to provide mature enterprise decisions.