ICEIS 2017 Abstracts


Area 1 - Databases and Information Systems Integration

Full Papers
Paper Nr: 14
Title:

An Approach to Evaluate the Impact on Travel Time of Bus Network Changes

Authors:

Kathrin Rodríguez Llanes and Marco A. Casanova

Abstract: This paper proposes an approach to evaluate the impact of bus network changes on bus travel time. The approach relies on data obtained from buses equipped with GPS devices, which act as mobile traffic sensors. It involves three main steps: (1) analysis of the bus network to determine which road segments are frequently traversed by buses; (2) computation of bus travel time patterns by segment; (3) evaluation of how much the bus travel time patterns vary when bus network changes take place. The approach combines graph algorithms and geospatial data mining techniques. It can be applied to cities served by a dense bus network, where buses are equipped with active GPS devices that continuously transmit their position. The paper applies the proposed approach to evaluate how bus travel time patterns in the City of Rio de Janeiro were affected by traffic changes implemented mostly for the Rio 2016 Olympic Games.
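
As a rough illustration of steps (2) and (3), the travel-time pattern of each segment can be summarized by a robust statistic such as the median and compared before and after a network change. The segment names and timings below are invented; the paper's pipeline derives such timings from real GPS streams.

```python
# Hypothetical per-segment traversal times (minutes); steps (2)-(3) reduce
# to computing a robust pattern per segment and measuring its shift.
from statistics import median

before = {"seg_1": [4.0, 4.2, 3.9, 4.1], "seg_2": [7.0, 7.5, 6.8]}
after  = {"seg_1": [4.1, 4.0, 4.2, 3.8], "seg_2": [9.5, 10.1, 9.8]}

def pattern(times_by_segment):
    """Travel-time pattern: median traversal time of each segment."""
    return {seg: median(ts) for seg, ts in times_by_segment.items()}

def impact(before, after):
    """Relative change of the median travel time, per segment."""
    pb, pa = pattern(before), pattern(after)
    return {seg: (pa[seg] - pb[seg]) / pb[seg] for seg in pb}

print(impact(before, after))  # seg_2 slowed markedly, seg_1 barely moved
```

The median is used here only as a simple stand-in for whatever pattern statistic the approach computes per segment.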

Paper Nr: 29
Title:

An Approach to Collaborative Management of Informal Projects

Authors:

Luma Ferreira

Abstract: Informal projects, such as preparing schoolwork, organizing a social event, or planning a trip, are commonly carried out in groups. Current approaches to project management are complex and rigorous, and do not offer the flexibility needed to manage informal projects. We propose an approach based on cooperative work concepts to support the participation of members in the collaborative management of informal projects. The proposed approach requires tool support, so, aiming to verify its effectiveness, we developed and employed a mobile application that supports the collaborative management of informal projects. Two case studies were conducted to investigate whether the approach, along with the tool, assists in management activities and encourages the participation of project members. Our preliminary results show that, compared to existing approaches, ours improves planning, monitoring, and control during project management, encourages the participation of members, and helps in the recognition of members.

Paper Nr: 33
Title:

News Dissemination on Twitter and Conventional News Channels

Authors:

Agrima Seth and Shraddha Nayak

Abstract: Big Data is "things that one can do at a large scale that cannot be done at a small one". Analyzing the flow of news events happening worldwide falls within the scope of Big Data. Twitter has emerged as a valuable source of information, where users post their thoughts on news events at a huge scale. At the same time, traditional media channels also produce huge amounts of data. This paper presents means to compare the propagation of the same news topic through Twitter and news articles, two important yet different sources. We present map-based visual means that make it possible to visualize the flow of information at different levels of temporal granularity. We also provide an example of how the flow can be interpreted.

Paper Nr: 34
Title:

A Characterization of Cloud Computing Adoption based on Literature Evidence

Authors:

Antonio Carlos Marcelino de Paula

Abstract: Context: The cloud computing paradigm has received increasing attention because of its claimed financial and functional benefits. This paradigm is based on a customizable and resourceful platform for deploying software. A number of competing providers can support organizations in accessing computing services without owning the corresponding infrastructure. However, migrating information systems and adopting this paradigm is not a trivial task. For this reason, evidence from the literature reporting and analyzing experiences of this migration should be widely disseminated and organized for use by companies and the research community. Goal: Characterize the main strategies and methodologies reported in the literature to describe and analyze the adoption of and migration to cloud computing. Method: The characterization followed a four-phase approach, with the selection of studies published in conferences and journals as its starting point. Results: Data gathered from these studies reveal a tendency for companies to choose the public deployment model, the IaaS service model, and the Amazon platform, and show that the most important characteristics in the cloud adoption decision are cost, performance, security, and privacy. Conclusion: Given the variety of strategies, approaches, and tools reported in the primary studies, the results of this characterization study are expected to help establish knowledge on how companies should adopt and migrate to the cloud. These findings can be a useful reference for developing guidelines for the effective use of cloud computing.

Paper Nr: 47
Title:

Contact Deduplication in Mobile Devices using Textual Similarity and Machine Learning

Authors:

Eduardo N. Borges

Abstract: This paper presents a method that automatically identifies duplicate contacts, i.e., records representing the same person or organization, collected from multiple data sources. Contacts are compared using similarity functions whose scores are combined by a classification model based on decision trees, avoiding the need for an expert to manually configure similarity thresholds. The experiments show that the proposed method correctly identified up to 92% of duplicate contacts.
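
A minimal sketch of the idea, per-field similarity scores combined by a decision model, might look as follows. The field names, thresholds, and hand-written tree are hypothetical stand-ins; in the paper the combination model is learned from data.

```python
# Illustrative sketch, not the authors' implementation: compare two contact
# records field by field, then combine the scores with a hand-written
# stand-in for a learned decision tree.
from difflib import SequenceMatcher

def field_sim(a: str, b: str) -> float:
    """Normalized edit-based similarity between two field values."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_duplicate(c1: dict, c2: dict) -> bool:
    """Hand-set split points; the paper learns them instead."""
    email_s = field_sim(c1["email"], c2["email"])
    name_s = field_sim(c1["name"], c2["name"])
    phone_s = field_sim(c1["phone"], c2["phone"])
    if email_s > 0.9:                      # near-identical e-mail addresses
        return True
    return name_s > 0.8 and phone_s > 0.7  # similar name backed by phone

a = {"name": "Eduardo Borges", "email": "eborges@furg.br", "phone": "+55 53 1234"}
b = {"name": "Eduardo N. Borges", "email": "eborges@furg.br", "phone": "5553-1234"}
print(is_duplicate(a, b))  # True: the records refer to the same person
```

Replacing the hand-set thresholds with a trained decision tree is exactly what removes the need for expert-configured similarity thresholds.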

Paper Nr: 54
Title:

Using a Time based Relationship Weighting Criterion to Improve Link Prediction in Social Networks

Authors:

C. P. M. T. Muniz

Abstract: In recent years, considerable attention has been devoted to research on the link prediction (LP) problem in complex networks. This problem consists in predicting the likelihood that an association between two currently unconnected nodes in a network will appear in the future. Various methods have been developed to solve it. Some of them compute a compatibility degree (link strength) between connected nodes and apply similarity metrics between non-connected nodes in order to identify potential links. However, despite the acknowledged importance of temporal data for the LP problem, few initiatives have investigated the use of this kind of information to represent link strength. In this paper, we propose a weighting criterion that combines the frequency of interactions with temporal information about them to define the link strength between pairs of connected nodes. The results of our experiments with traditional weighted similarity metrics on ten co-authorship networks confirm our hypothesis that weighting links based on temporal information may, in fact, improve link prediction. The formulation of the proposed criterion, the experimental procedure, and the results are discussed in detail.
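
The idea of a time-based weighting criterion can be sketched as follows. The decay formula and the data are illustrative assumptions, not the authors' exact criterion.

```python
# Illustrative time-based weighting: each link's strength sums its
# interactions, discounting older ones, and a weighted common-neighbours
# metric then scores candidate links.
from collections import defaultdict

def link_strength(years, current_year=2017, decay=0.9):
    """Interaction frequency with a recency emphasis."""
    return sum(decay ** (current_year - y) for y in years)

# co-authorship interactions: (author, author) -> publication years
interactions = {("A", "B"): [2015, 2016, 2016], ("B", "C"): [2010],
                ("A", "D"): [2016], ("D", "C"): [2016]}

weights = defaultdict(float)
neighbours = defaultdict(set)
for (u, v), years in interactions.items():
    weights[(u, v)] = weights[(v, u)] = link_strength(years)
    neighbours[u].add(v)
    neighbours[v].add(u)

def weighted_common_neighbours(x, y):
    """Score a candidate link (x, y) by the strength of shared paths."""
    return sum(weights[(x, z)] + weights[(z, y)]
               for z in neighbours[x] & neighbours[y])

# A and C are unconnected; recent, frequent collaborations raise the score
print(round(weighted_common_neighbours("A", "C"), 4))
```

Any weighted similarity metric from the LP literature could consume these link strengths in place of plain interaction counts.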

Paper Nr: 132
Title:

A Strategy for Selecting Relevant Attributes for Entity Resolution in Data Integration Systems

Authors:

Gabrielle Karine Canalle

Abstract: Data integration is an essential task for achieving a unified view of data stored in heterogeneous and distributed data sources. A key step in this process is Entity Resolution, which consists of identifying instances that refer to the same real-world entity. In general, similarity functions are used to discover equivalent instances. The quality of the Entity Resolution result is directly affected by the set of attributes selected to be compared, but such attribute selection can be challenging. In this context, this work proposes a strategy for selecting relevant attributes to be considered in the Entity Resolution process, more precisely in the instance matching phase. The strategy considers characteristics of attributes, such as the quantity of duplicated and null values, in order to identify the most relevant ones for the instance matching process. In our experiments, the proposed strategy achieved good results for the Entity Resolution process: the attributes classified as relevant were the ones that contributed to finding the greatest number of true matches with few incorrect matches.

Paper Nr: 136
Title:

A Study on the Relationship between Internal and External Validity Indices Applied to Partitioning and Density-based Clustering Algorithms

Authors:

Caroline Tomasini and Eduardo N. Borges

Abstract: Measuring the quality of data partitions is essential to the success of clustering applications. Many different validity indices have been proposed in the literature, but choosing the appropriate index for evaluating the results of a particular clustering algorithm remains a challenge. Clustering results can be evaluated using indices based on external or internal criteria. An external criterion requires a previously defined partitioning of the data for comparison with the clustering results, while an internal criterion evaluates clustering results considering only the data properties. This paper proposes a method that helps the user select the most suitable internal cluster validity index to apply to the results of partitioning and density-based clustering algorithms. We have looked into the relationships between internal and external indices, relating them through linear regression and regression model trees. Each algorithm was run over synthetic datasets generated for this purpose, using different configurations. Experimental results point out that Silhouette and Gamma are the most suitable indices for evaluating both datasets with the compactness property and datasets with multiple densities.
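
The Silhouette index singled out by the experiments can be computed directly from its definition; below is a minimal one-dimensional sketch, illustrative only and not the evaluation code used in the paper.

```python
# Minimal 1-D Silhouette computation: mean over points of (b - a) / max(a, b),
# where a is the mean intra-cluster distance and b the mean distance to the
# closest other cluster; values near 1 mean compact, well-separated clusters.
def silhouette(points, labels):
    n = len(points)
    scores = []
    for i in range(n):
        same = [abs(points[i] - points[j]) for j in range(n)
                if j != i and labels[j] == labels[i]]
        a = sum(same) / len(same)
        b = min(sum(abs(points[i] - points[j]) for j in range(n)
                    if labels[j] == lab) / labels.count(lab)
                for lab in set(labels) if lab != labels[i])
        scores.append((b - a) / max(a, b))
    return sum(scores) / n

# two compact, well-separated clusters score close to 1
print(silhouette([1.0, 1.1, 1.2, 9.0, 9.1, 9.2], [0, 0, 0, 1, 1, 1]))
```

An external criterion would instead compare the labels against a known ground-truth partitioning, which is exactly the relationship the paper models by regression.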

Paper Nr: 137
Title:

Assessing the Impact of Stemming Algorithms Applied to Judicial Jurisprudence - An Experimental Analysis

Authors:

Robert A. N. de Oliveira and Methanias Colaço Júnior

Abstract: Stemming algorithms are commonly used during the textual preprocessing phase to reduce data dimensionality. However, this reduction achieves different levels of efficacy depending on the domain to which it is applied. Hence, this work is an experimental analysis of the dimensionality reduction obtained by stemming a real base of judicial jurisprudence formed by four subsets of documents. With such a document base, it is necessary to adopt techniques that increase the efficiency of storing and searching for information; otherwise, both computing resources and access to justice are lost, as stakeholders may not find the documents they need to plead their rights. The results show that, depending on the algorithm and the collection, there may be a reduction of up to 52% of the terms in the documents. Furthermore, we found a strong correlation between the reduction percentage and the quantity of unique terms in the original document. The RSLP algorithm was the most effective in terms of dimensionality reduction among the stemming algorithms analyzed in the four collections studied, and it excelled when applied to judgments of the Appeals Court.

Paper Nr: 140
Title:

On Top-K Queries Over Evidential Data

Authors:

Fatma Ezzahra Bousnina, Mouna Chebbah and Mohamed Anis Bach Tobji

Abstract: Uncertain data are common in many domains, such as sensor networks, multimedia, and social media. Top-k queries provide results ordered according to a defined score, and represent an important tool for exploring uncertain data. Most existing works cope either with certain data or with probabilistic top-k queries; however, to the best of our knowledge, no work exploits top-k semantics in the context of Evidence Theory. In this paper, we introduce a new score function suitable for evidential data. Since the result of the score function is an interval, we adopt a comparison method for ranking intervals. Finally, we extend the usual semantics/interpretations of top-k queries to the evidential setting.
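
One simple way to rank interval-valued scores, offered here only as a stand-in for the comparison method the paper adopts, is the Hurwicz criterion: a weighted mix of the interval's bounds, with the weight controlling optimism. The tuples and intervals below are invented.

```python
# Rank tuples whose evidential score is an interval [lo, hi] by mixing
# the bounds with an optimism weight alpha (Hurwicz criterion).
def hurwicz(interval, alpha=0.5):
    lo, hi = interval
    return alpha * hi + (1 - alpha) * lo

scores = {"t1": (0.2, 0.9), "t2": (0.4, 0.6), "t3": (0.1, 0.3)}
top_k = sorted(scores, key=lambda t: hurwicz(scores[t]), reverse=True)[:2]
print(top_k)  # the k tuples with the highest ranked interval scores
```

Other interval-ranking rules (e.g. comparing bounds lexicographically) would slot into the same pipeline by swapping the key function.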

Paper Nr: 152
Title:

EE-(m,k)-Firm: A Method to Dynamic Service Level Management in Enterprise Environment

Authors:

Bechir Alaya

Abstract: Due to the specificities of enterprise environments, operations management is a challenging problem. In this paper, we use a compromise between the available resources and the quality of service (QoS) granularity. We combine this compromise with a guarantee technique in order to achieve an intelligent dropping of sub-operations according to the importance of each operation. The resulting approach increases availability, performance, reliability, and system dependability. The aim of our contribution is to ensure client satisfaction by increasing QoS while handling the characteristics of enterprise environments. The effectiveness of our proposal is measured through a simulation study.

Paper Nr: 202
Title:

Formal Concept Analysis Applied to Professional Social Networks Analysis

Authors:

Paula R. C. Silva, Sérgio M. Dias and Wladmir C. Brandão

Abstract: Since the recent proliferation of online social networks, a specific type of social network has been attracting more and more interest from people all around the world: professional social networks, where users' interests are oriented to business. Analyzing the behavior of this type of user can generate knowledge about the competences people have developed in their professional careers. In this scenario, and considering the amount of information available in professional social networks, the adoption of effective computational methods to analyze these networks has become fundamental. Formal concept analysis (FCA) has been an effective technique for social network analysis (SNA) because it allows identifying conceptual structures in data sets through concept lattices and implication rules. In particular, a specific set of implication rules, known as proper implications, can represent the minimum set of conditions to reach a specific goal. In this work, we propose an FCA-based approach to identify relations among professional competences through proper implications. The experimental results, with professional profiles from LinkedIn and proper implications extracted with the PropIm algorithm, show the minimum sets of skills necessary to reach job positions.

Paper Nr: 206
Title:

Guidelines of Data Quality Issues for Data Integration in the Context of the TPC-DI Benchmark

Authors:

Qishan Yang

Abstract: Nowadays, many business intelligence and master data management initiatives are based on regular data integration. Since data integration extracts and combines a variety of data sources, it is considered a prerequisite for data analytics and management. More recently, TPC-DI was proposed as an industry benchmark for data integration; it is designed to benchmark data integration and serve as a standard for evaluating ETL performance. There are a variety of data quality problems in source data, such as multi-meaning attributes and inconsistent data schemas, which not only cause problems for the data integration process but also affect further data mining and analytics. This paper summarises typical data quality problems in data integration and adapts the traditional data quality dimensions to classify them. We found that data completeness, timeliness, and consistency are critical for data quality management in data integration, and that data consistency should be further defined at the pragmatic level. To prevent typical data quality problems and proactively manage data quality in ETL, we propose a set of practical guidelines for researchers and practitioners conducting data quality management in data integration.

Paper Nr: 215
Title:

An Accurate Tax Fraud Classifier with Feature Selection based on Complex Network Node Centrality Measure

Authors:

Tales Matos and José Antonio F. de Macedo

Abstract: Fiscal evasion represents a very serious issue in many developing countries. In this context, tax fraud detection constitutes a challenging problem, since fraudsters frequently change their behavior to circumvent existing laws and devise new kinds of fraud. Detecting such changes proves challenging, since traditional classifiers fail to select features that exhibit frequent changes. In this paper, we provide two contributions that tackle the tax fraud detection problem effectively: first, we introduce a novel feature selection algorithm, based on complex network techniques, that is able to capture determinant fraud indicators; over time, this kind of indicator turns out to be more stable than new fraud indicators. Second, we propose a classifier that leverages the aforementioned algorithm to accurately detect tax fraud. To prove the validity of our contributions, we provide an experimental evaluation using real-world datasets obtained from the State Treasury Office of Ceará (SEFAZ-CE), Brazil, showing that our method outperforms state-of-the-art approaches from the literature in terms of F1 score.
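
The centrality idea behind the feature selection algorithm can be sketched on a toy co-occurrence graph. The indicator names and the use of plain degree centrality are illustrative assumptions; the paper's network construction and centrality measure are more elaborate.

```python
# Toy centrality-based feature selection: connect indicators that co-occur
# in fraud cases, then keep the most central ones as the feature set.
from itertools import combinations
from collections import Counter

# each fraud case lists the indicators (features) observed in it
cases = [{"underreported_sales", "shell_supplier"},
         {"underreported_sales", "round_amounts"},
         {"underreported_sales", "late_filing"},
         {"shell_supplier", "round_amounts"}]

degree = Counter()
edges = set()
for case in cases:
    for f, g in combinations(sorted(case), 2):   # co-occurrence edge
        if (f, g) not in edges:                  # count each edge once
            edges.add((f, g))
            degree[f] += 1
            degree[g] += 1

# the most central indicators form the selected feature set
selected = [f for f, _ in degree.most_common(2)]
print(selected)  # 'underreported_sales' touches the most other indicators
```

The intuition matches the abstract: indicators that sit centrally in the co-occurrence network are the determinant, stable ones, while peripheral indicators come and go as fraud schemes change.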

Paper Nr: 239
Title:

fgssjoin: A GPU-based Algorithm for Set Similarity Joins

Authors:

Rafael D. Quirino and Sidney R. Junior

Abstract: Set similarity join is a core operation for text data integration, cleaning, and mining. Most state-of-the-art solutions rely on inherently sequential, CPU-based algorithms. In this paper, we propose a parallel algorithm for the set similarity join problem that harnesses the power of GPU systems through filtering techniques and divide-and-conquer strategies, and scales well with data size. Experiments show substantial speedups over the fastest algorithms in the literature.
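
A CPU-side sketch of the underlying operation, a Jaccard-threshold self-join with the classic prefix filter, clarifies what such algorithms parallelize on the GPU. This is illustrative code, not the authors' implementation.

```python
# Jaccard set similarity self-join with the prefix filter: candidate pairs
# must share a token in their (rarest-first) prefixes before being verified.
from math import ceil
from collections import defaultdict

def jaccard(a, b):
    return len(a & b) / len(a | b)

def ssjoin(sets, t=0.6):
    """Return all pairs (i, j), i < j, with Jaccard >= t."""
    freq = defaultdict(int)
    for s in sets:
        for tok in s:
            freq[tok] += 1
    # rare tokens first, so short prefixes rarely collide
    rank = {tok: r for r, tok in
            enumerate(sorted(freq, key=lambda x: (freq[x], x)))}
    index = defaultdict(set)          # prefix token -> ids seen so far
    cands = set()
    for i, s in enumerate(sets):
        toks = sorted(s, key=rank.get)
        prefix = toks[:len(s) - ceil(t * len(s)) + 1]
        for tok in prefix:
            cands.update((j, i) for j in index[tok])
            index[tok].add(i)
    return sorted(p for p in cands if jaccard(sets[p[0]], sets[p[1]]) >= t)

docs = [{"data", "set", "join"}, {"data", "set", "join", "gpu"},
        {"totally", "different", "tokens"}]
print(ssjoin(docs))  # only the first two documents pass the threshold
```

The filtering step and the independent per-candidate verifications are the parts that map naturally onto GPU threads.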

Paper Nr: 242
Title:

Extensions, Analysis and Experimental Assessment of a Probabilistic Ensemble-learning Framework for Detecting Deviances in Business Process Instances

Authors:

Alfredo Cuzzocrea and Francesco Folino

Abstract: This paper significantly extends a previous proposal that introduced an innovative ensemble-learning framework for mining business process deviances by exploiting multi-view learning. Here, we introduce several relevant contributions: (i) a further learning method that extends and refines the previous ones by probabilistically combining different deviance detection models (DDMs); (ii) a complete conceptual architecture that implements the extended multi-view ensemble-learning framework; (iii) a wide and comprehensive experimental assessment of the framework, including comparison with existing competitors. The investigated scientific context falls within the so-called Business Process Intelligence (BPI) research area, which is relevant to a wide range of real-life applications. These novel contributions confirm the flexibility, reliability, and effectiveness of the general deviance detection framework.

Short Papers
Paper Nr: 26
Title:

Dynamic Indexing for Incremental Entity Resolution in Data Integration Systems

Authors:

Priscilla Kelly M. Vieira

Abstract: Entity Resolution (ER) is the problem of identifying groups of tuples, from one or multiple data sources, that represent the same real-world entity. This is a crucial stage of data integration processes, which often need to integrate data at query time. The task becomes even more challenging in scenarios with dynamic data sources or large volumes of data. As most ER techniques deal with all tuples at once, new solutions have been proposed to handle large volumes of data. One possible approach consists in performing the ER process on query results rather than on the whole data set. It is also possible to reuse previous results of ER tasks in order to reduce the number of comparisons between pairs of tuples at query time. Similarly, indexing techniques can be employed to help identify equivalent tuples and to reduce the number of pairwise comparisons. In this context, this work proposes an indexing technique for incremental Entity Resolution processes. The contributions of this work are the specification, implementation, and evaluation of the proposed indexes. We performed experiments measuring the time spent storing, accessing, and updating the indexes, and concluded that reuse makes the ER process more efficient than reprocessing tuple comparisons, with similar result quality.

Paper Nr: 31
Title:

CrimeVis: An Interactive Visualization System for Analyzing Crime Data in the State of Rio de Janeiro

Authors:

Luiz José Schirmer Silva, Sonia Fiol González and Cassio F. P. Almeida

Abstract: This paper presents an interactive graphic visualization system for analyzing criminal data in the State of Rio de Janeiro, provided by the Public Safety Institute of Rio de Janeiro. The system comprises a set of integrated tools for visualizing and analyzing statistical data on crimes, which makes it possible to extract and infer relevant information regarding government policies on public safety and their effects. The tools allow us to visualize multidimensional data, spatiotemporal data, and multivariate data in an integrated manner using brushing and linking techniques. The paper also presents a case study to evaluate the set of tools we developed.

Paper Nr: 46
Title:

Enabling Business Domain-Specific eCollaboration - How to Integrate Virtual Cooperation in Product Costing

Authors:

Diana Lück

Abstract: Due to digitalization and the rising relevance of knowledge work, virtual cooperation in enterprises is increasingly important. Product costing is an example of a business domain that is characterized by a high demand for communication, coordination, and information exchange. Time and location-based restrictions underline the necessity of computer-assisted support in collaboration. However, an approach to integrate IT-support for virtual cooperation directly into the core process of business domains like product costing is still missing. To overcome this challenge, we show how to enable Business Domain-Specific eCollaboration based on the design principles for integrated virtual cooperation in product costing. This paper presents how to combine collaboration support directly with the process of this particular business domain.

Paper Nr: 116
Title:

Logical Unified Modeling for NoSQL Databases

Authors:

Fatma Abdelhedi and Amal Ait Brahim

Abstract: NoSQL data stores are becoming widely used to handle Big Data; these systems operate on a schema-less data model, enabling users to incorporate new data into their applications without a predefined schema. However, there is still a need for a conceptual model to define how data will be structured in the database. In this paper, we show how to store Big Data described by a conceptual model in NoSQL systems. For this, we use Model Driven Architecture (MDA), which provides a framework for the automatic transformation of models. Starting from a conceptual model describing a set of complex objects, we propose transformation rules, formalized with QVT, to generate NoSQL physical models. To ensure efficient automatic transformation and to limit the impact of the technical aspects of NoSQL systems, we propose a generic logical model that is compatible with the three types of NoSQL systems (column, document, and graph). We report experiments with our approach on a case study from the health care field. The results show that the proposed logical model can be effectively transformed into different NoSQL physical models independently of their specific details.

Paper Nr: 168
Title:

Towards a Data-oriented Optimization of Manufacturing Processes - A Real-Time Architecture for the Order Processing as a Basis for Data Analytics Methods

Authors:

Matthias Blum and Guenther Schuh

Abstract: Real-time data analytics methods are key to overcoming currently rigid planning and improving manufacturing processes by analysing historical data, detecting patterns, and deriving measures to counteract the issues. The key element to improve, assist, and optimize the process flow is a virtual representation of a product on the shop floor, called the digital twin or digital shadow. Using the collected data requires high data quality; therefore, measures to verify the correctness of the data are needed. Based on these issues, the paper presents a real-time reference architecture for order processing. This reference architecture consists of different layers and integrates real-time data from different sources as well as measures to improve data quality. Based on it, deviations between plan data and feedback data can be measured in real time, and countermeasures to reschedule operations can be applied.

Paper Nr: 210
Title:

Integrated Analytics for Application Management using Stream Clustering and Semantics

Authors:

M. Omair Shafiq

Abstract: Large-scale software applications produce enormous amounts of execution data in the form of logs, which makes managing the execution of such applications challenging. Several semantically enhanced analytical solutions have been proposed for enhanced monitoring and management of software applications. In this paper, the author proposes a customized semantic model for representing application execution and a scalable stream-clustering-based processing solution. The stream clustering approach is the key to combining the other analytical solutions through the proposed customized semantic model for logs. The approach clusters log data, produced by events occurring during execution, at large scale and in a continuous streaming manner. The solution uses semantics for better expressiveness of log events, related data, and analytical approaches, helping to enhance the monitoring and management of software applications. The paper presents the customized semantic logging model, the algorithm design, a discussion of the scalable stream clustering solution and its integration with other analytical solutions, and an experimental evaluation demonstrating the applicability of the proposed solution.
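
The stream-clustering idea can be illustrated with a toy online algorithm that absorbs each log event into the closest cluster or opens a new one. The Jaccard similarity and the 0.5 threshold are assumptions; the paper's algorithm and semantic model are considerably richer.

```python
# Toy online stream clustering of log events represented as token sets.
def similarity(a, b):
    return len(a & b) / len(a | b)   # Jaccard over an event's tokens

def stream_cluster(events, threshold=0.5):
    clusters = []                    # each cluster keeps its token union
    assignment = []
    for ev in events:
        best, best_sim = None, 0.0
        for idx, rep in enumerate(clusters):
            s = similarity(ev, rep)
            if s > best_sim:
                best, best_sim = idx, s
        if best is not None and best_sim >= threshold:
            clusters[best] |= ev     # absorb the event into the cluster
            assignment.append(best)
        else:
            clusters.append(set(ev)) # open a new cluster
            assignment.append(len(clusters) - 1)
    return assignment

logs = [{"db", "timeout", "retry"}, {"db", "timeout", "error"},
        {"login", "ok"}, {"login", "ok", "user"}]
print(stream_cluster(logs))  # database events and login events group apart
```

A single pass with constant state per cluster is what makes this style of algorithm suitable for continuous log streams.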

Paper Nr: 221
Title:

CSL: A Combined Spanish Lexicon - Resource for Polarity Classification and Sentiment Analysis

Authors:

Luis G. Moreno-Sandoval, Paola Beltrán-Herrera, Jaime A. Vargas-Cruz, Carolina Sánchez-Barriga and Alexandra Pomares-Quimbaya

Abstract: Opinion mining and sentiment analysis of texts from social networks such as Twitter have gained great importance during the last decade. Quality lexicons for the sentiment analysis task are easily found for languages such as English; however, this is not the case for Spanish. For this reason, we propose CSL, a Combined Spanish Lexicon approach for sentiment analysis that uses an ensemble of six Spanish lexicons and a weighted bag-of-words strategy. To build CSL we used 68,019 tweets previously classified by researchers at the Spanish Society of Natural Language Processing (SEPLN), obtaining a precision of 62.05 and a recall of 60.75 on the validation set, showing improvements in both measurements. Additionally, we compare the results of CSL with well-known commercial software for sentiment analysis in Spanish, finding an improvement of 10 points in precision and 15 points in recall.

Paper Nr: 223
Title:

Inventorying Systems: An Action Research

Authors:

Vanessa Soares, Rejane Figueiredo and Elaine Venson

Abstract: Maintainability, characterized by ease of understanding, is strongly related to the availability of correct and up-to-date information about a system and to how easily maintenance staff can understand it. The Brazilian public administration maintains many legacy systems whose documentation is non-existent or incomplete. The main purpose of this work was to identify the inventory attributes that must be recorded to support systems maintenance. The methodology applied was action research, whose cycles allowed identifying the attributes required by each of the stakeholders, civil servants, and providers to inventory the systems, as well as how to register and access these attributes. As a result, the set of attributes produced by the interaction between researchers and stakeholders was considered essential to the infrastructure and systems areas, managers, and providers.

Paper Nr: 236
Title:

Unity Decision Guidance Management System: Analytics Engine and Reusable Model Repository

Authors:

Mohamad Omar Nachawati

Abstract: Enterprises across all industries increasingly depend on decision guidance systems to facilitate decision-making across all lines of business. Despite significant technological advances, current paradigms for developing decision guidance systems lead to a tight integration of the analytic models, algorithms, and underlying tools that comprise these systems, which inhibits both reusability and interoperability. To address these limitations, this paper focuses on the development of the Unity analytics engine, which enables the construction of decision guidance systems from a repository of reusable analytic models expressed in JSONiq. Unity extends JSONiq with support for algebraic modeling using a symbolic-computation-based technique and compiles reusable analytic models into lower-level, tool-specific representations for analysis. In this paper, we also propose a conceptual architecture for a Decision Guidance Management System, based on Unity, to support the rapid development of decision guidance systems. Finally, we conduct a preliminary experimental study of the overhead introduced by automatically translating reusable analytic models into tool-specific representations for analysis. Initial results indicate that the execution times of optimization models automatically generated by Unity from reusable analytic models are within a small constant factor of those of corresponding, manually crafted optimization models.

Paper Nr: 254
Title:

Performance Evaluation of Cloud-based RDBMS through a Cloud Scripting Language

Authors:

Andrea S. Charão, Guilherme F. Hoffmann and Luiz A. Steffenel

Abstract: Cloud computing has brought new opportunities, but also new concerns, for developing enterprise information systems. In this work, we investigated the performance of two cloud-based relational database services, accessing them via scripts that also execute on a cloud platform, using Google Apps Script technology. Preliminary results show little difference between the services in their trial versions, considering the limitations imposed by the Google platform.

Paper Nr: 255
Title:

Enterprise Systems: The Quality of System Outputs and their Perceived Business Value

Authors:

Ahed Abugabah

Abstract: Organizations are exploring the opportunities offered by information technology to reduce costs, improve overall performance, and gain efficiency. Enterprise Resource Planning (ERP) systems are viewed as powerful solutions that help improve productivity, performance, and overall quality. However, effective use of and beneficial outcomes from such systems are neither guaranteed nor universally recognized. This study aimed at evaluating the business value of ERP systems and their perceived benefits at the user level. This short paper briefly presents some empirical results related to enterprise system impacts and benefits. The reported results are part of a larger project investigating the business value of ERP systems. The focus of this paper is on technical system factors, including system features, system quality, and information quality, and their impact on the business value of ERPs as perceived by system users in particular aspects such as efficiency, creativity, and effectiveness.

Paper Nr: 259
Title:

Design and Implementation of Falling Star - A Non-Redundant Spatio-Multidimensional Logical Model for Document Stores

Authors:

Ibtisam Ferrahi and Sandro Bimonte

Abstract: In the era of Spatial Big Data, some NoSQL spatial DBMSs have been developed to deal with the Spatiality, Velocity, Variety, and Volume of Spatial Big Data. In this context, some works have recently studied NoSQL logical Data Warehouse (DW) models. However, these proposals do not investigate storing and querying spatial data. Therefore, in this paper we propose a new logical model for document Spatial DWs. Moreover, motivated by the expressivity, readability and interoperability offered by UML profiles, we represent our model using a UML profile. Finally, we present an implementation in document Spatial DBMSs.

Paper Nr: 267
Title:

A Method for Gathering and Classification of Scientific Production Metadata in Digital Libraries

Authors:

Elisabete Ferreira and Marcos Sfair Sunye

Abstract: This paper introduces a methodology for the automatic loading of metadata and open access scientific articles, spread across scientific journals, into Institutional Digital Repositories (IDRs), obtained by extracting information from the researchers' curricula. A further objective is to help the institution plan the costs required to support the growth of its digital environment, considering the scientific data that would be stored in it. The aggregation of scientific production in a single institutional digital environment allows institutions to generate internal indicators of scientific and technological production, conduct studies through the application of data mining tools, and support the implementation of management policies. For the purpose of implementation, a set of components was developed for collecting scientific articles free of all access restrictions.

Paper Nr: 303
Title:

A Method for Web Content Extraction and Analysis in the Tourism Domain

Authors:

Ermelinda Oro and Massimo Ruffolo

Abstract: Big data generated across the web is assuming growing importance in producing insights useful for understanding real-world phenomena and making smarter decisions. Tourism is one of the leading growth sectors; therefore, methods and technologies that simplify and empower web content gathering, processing, and analysis are becoming more and more important in this application area. In this paper, we present a web content analytics method that automates and simplifies content extraction and acquisition from many different web sources, such as newspapers and social networks, accelerates content cleaning, analysis, and annotation, and speeds up insight generation through visual exploration of analysis results. We also describe an application to a real-world use case regarding the analysis of the touristic impact of the Italian Open tennis tournament. The obtained results show that our method makes the analysis of news and social media posts easier, more agile, and more effective.

Paper Nr: 305
Title:

Price Modeling of IaaS Providers - An Approach Focused on Enterprise Application Integration

Authors:

Cássio L. M. Belusso, Sandro Sawicki and Vitor Basto-Fernandes

Abstract: One of the main advances in information technology today is cloud computing. It is a great alternative for users to reduce costs related to the need to acquire and maintain computational infrastructure to develop, implement and execute software applications. Cloud computing services are offered by providers and can be classified into three main modalities: Platform-as-a-Service (PaaS), Software-as-a-Service (SaaS) and Infrastructure-as-a-Service (IaaS). In IaaS, the user has a virtual machine at their disposal with the desired computational resources at a given cost. Generally, the providers offer infrastructure services divided into instances, with pre-established configurations. The main challenge faced by companies is to choose the instance that best fits their needs among the many options offered by providers. Frequently, these companies need a large computational infrastructure to manage and improve their business processes and, due to the high cost of maintaining local infrastructure, they have begun to migrate applications to the cloud in order to reduce these costs. In this paper, we introduce a proposal for price modeling of instances of virtual machines using linear regression. This approach analyzes a set of simplified hypotheses considering the following providers: Amazon EC2, Google Compute Engine and Microsoft Windows Azure.
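As a minimal sketch of the kind of price model the paper describes, the snippet below fits a one-variable linear regression (ordinary least squares via the closed-form solution) to instance configurations. The vCPU counts and hourly prices are invented for illustration, not actual provider quotes.

```python
# Illustrative sketch of instance price modeling with simple linear
# regression (ordinary least squares, closed-form solution).
# The instance configurations and prices below are hypothetical.

def fit_ols(xs, ys):
    """Fit y = a + b*x by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    b = sxy / sxx                 # slope: marginal price per vCPU
    a = mean_y - b * mean_x       # intercept: baseline price
    return a, b

# Hypothetical instances: (vCPU count, hourly price in USD).
vcpus = [1, 2, 4, 8, 16]
prices = [0.05, 0.09, 0.17, 0.33, 0.65]

a, b = fit_ols(vcpus, prices)
estimate = a + b * 32  # predicted price of a hypothetical 32-vCPU instance
```

In practice each provider would get its own model (possibly with more variables, such as RAM and storage), and a company could compare the predicted prices of candidate instances across providers.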

Paper Nr: 324
Title:

Challenges for Value-driven Semantic Data Quality Management

Authors:

Rob Brennan

Abstract: This paper reflects on six years of developing semantic data quality tools and curation systems for both a large-scale social sciences data collection and a major web of data hub. This experience has led the author to believe in using organisational value as a mechanism for automating data quality management, to deal with Big Data volumes and variety. However, there are many challenges in developing these automated systems, and this discussion paper sets out these challenges with respect to the current state of the art and identifies a number of potential avenues for researchers to tackle them.

Posters
Paper Nr: 23
Title:

Enterprise Level Security with Homomorphic Encryption

Authors:

Kevin Foltz and William R. Simpson

Abstract: Enterprise Level Security (ELS) is an approach to enterprise information exchange that provides strong security guarantees. It incorporates measures for authentication, encryption, access controls, credential management, monitoring, and logging. ELS has been adapted for cloud hosting using the Virtual Application Data Center (VADC) approach. However, a key vulnerability in placing unprotected data in the cloud is the database that stores each web application’s data. ELS puts controls on the end-to-end connection from requester to application, but an exploit of the back-end database can allow direct access to data and bypass ELS controls at the application. In a public cloud environment the data and web application may be vulnerable to insider attacks using direct hardware access, misconfiguration, and redirection to extract data. Traditional encryption can be used to protect data in the cloud, but it must be transferred out of the cloud and decrypted to perform processing, and then re-encrypted and sent back to the cloud. Homomorphic encryption offers a way to not only store encrypted data, but also perform processing directly on the encrypted values. This paper examines the current state of homomorphic encryption and its applicability to ELS.

Paper Nr: 45
Title:

Mining User Interests for Personalized Tweet Recommendation on Map-Reduce Framework

Authors:

Guanyao Du and Jianying Sun

Abstract: The tremendous growth of micro-blogging systems in recent years poses some key challenges for recommender systems, such as how to process tweet big data in a distributed environment, how to strike a balance between highly accurate recommendations and efficiency, and how to produce diverse recommendations for millions of users. In our opinion, accurately, instantly, and completely capturing user preferences over time is the key to personalized tweet recommendation. Therefore, we introduce three features to model personal user interests and their evolution for tweet recommendation: textual information, user behaviors, and time. We then offer two enhanced recommendation models, the Topic-STG (Session-based Temporal Graph) model and the SVD (Singular Value Decomposition) model, which combine these features to learn user preferences and recommend personalized tweets. To further improve algorithm efficiency for micro-blogging big data, we provide parallel implementations of the Topic-STG and SVD models based on the Hadoop Map-Reduce framework. Experiments on a large-scale micro-blogging dataset illustrate the effectiveness of the proposed models and algorithms.
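An SVD-style latent factor model of the kind mentioned above can be sketched with plain stochastic gradient descent: user and tweet factor vectors are learned so that their dot product approximates observed interest scores. The interaction data, factor rank, and hyperparameters below are hypothetical, and the paper's Topic-STG model and Hadoop parallelization are not reproduced here.

```python
import random

# Illustrative sketch of an SVD-style latent factor model for user-tweet
# preferences, trained by stochastic gradient descent on observed scores.
# The interaction data below are a hypothetical toy example.

def factorize(ratings, n_users, n_items, k=2, epochs=3000, lr=0.01, reg=0.01):
    """Learn factors P, Q such that P[u] . Q[i] approximates rating (u, i)."""
    rng = random.Random(0)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - sum(P[u][f] * Q[i][f] for f in range(k))
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                # Gradient step with L2 regularization on both factors.
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

# (user, tweet, interest score) observations, hypothetical.
ratings = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 1), (2, 1, 2), (2, 2, 5)]
P, Q = factorize(ratings, n_users=3, n_items=3)

def predict(u, i):
    """Score a (user, tweet) pair by the dot product of the learned factors."""
    return sum(pf * qf for pf, qf in zip(P[u], Q[i]))
```

Unobserved (user, tweet) pairs can be scored the same way, and the highest-scoring unseen tweets recommended to each user.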

Paper Nr: 53
Title:

Gathering Formalized Information Requirements of a Data Warehouse

Authors:

Natalija Kozmina

Abstract: Information requirements of a data warehouse (DW) captured in natural language often suffer from being ambiguous, inaccurate, or repetitive. We offer an approach to formalize DW information requirements based on our experience of using a demand-driven methodology for DW conceptual design and the distinction between quantifying and qualifying data. In this paper we demonstrate a working prototype of the iReq tool, implemented for the purpose of collecting DW information requirements. The graphical user interface (GUI) of the iReq tool conforms to the requirement formalization metamodel acquired as a result of our previous research studies, is intuitive and user-friendly, and allows defining an unlimited number of requirement counterpart elements. The functionality of the iReq tool is broad; it allows deriving a conceptual model of a DW in a semi-automatic manner from the gathered information requirements. Due to space limitations, in this paper we cover only such components as the GUI for input of the information requirements, illustrated with application examples, its underlying formal requirement repository, and a graph database (DB) representing a glossary of terms for requirement definition.

Paper Nr: 65
Title:

Management and Innovation Models for Digital Identity in Public Sector

Authors:

Nunzio Casalino and Marisa Ciarlo

Abstract: This paper is aimed at analysing the international framework, both European and Italian, for innovative eID operating models, in order to identify the guidelines to follow for a correct identification of the operational requirements, the solutions, and the services offered by the DIMIM model (Digital Identity Management and Innovation Model). After the framework analysis, we define a new set of strategic guidelines customised to the most interesting and relevant sectors identified by the DIMIM. This step consists in highlighting, with the help of tools such as SWOT analysis and the priority matrix, the main constraints and opportunities emerging in the implementation process of the eID operational models. The paper is also aimed at identifying a universal, solid, multichannel authentication system, the "IAM", which will provide each individual with a set of solid and safe digital credentials allowing access to all the available services and promoting the creation of value-added services.

Paper Nr: 92
Title:

An Analysis of the Impact of Diversity on Stacking Supervised Classifiers

Authors:

Mariele Lanes, Paula F. Schiavo and Sidnei F. Pereira Jr.

Abstract: Due to the growth of research in the pattern recognition area, the limits of the techniques used for the classification task are increasingly tested. Thus, it is clear that specialized and properly configured classifiers are quite effective. However, it is not a trivial task to choose the most appropriate classifier to deal with a particular problem and set it up properly. In addition, there is no optimal algorithm to solve all prediction problems. Thus, in order to improve the result of the classification process, some techniques combine the knowledge acquired by individual learning algorithms, aiming to discover new patterns not yet identified. Among these techniques is the stacking strategy. This strategy consists in combining the outputs of base classifiers, induced by several learning algorithms using the same dataset, by means of another classifier called the meta-classifier. This paper aims to verify the relation between classifier diversity and the quality of stacking. We have performed numerous experiments whose results show the impact of multiple diversity measures on the gain of stacking.
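The stacking strategy described above can be sketched as follows: simple threshold rules act as base classifiers, their outputs become level-1 features, and a deliberately simple meta-level (an accuracy-weighted vote over those outputs) stands in for the meta-classifier. The toy dataset and base learners are hypothetical, not those used in the paper.

```python
# Illustrative sketch of stacking: base classifiers are trained on the
# same dataset, and their outputs feed a (here, very simple) meta-level.
# The base learners and toy data are hypothetical.

def make_threshold_clf(feature_idx):
    """Base learner: the threshold on one feature that best splits the training data."""
    def fit(X, y):
        best = (None, 0)
        for t in sorted({x[feature_idx] for x in X}):
            acc = sum((x[feature_idx] > t) == bool(label)
                      for x, label in zip(X, y)) / len(y)
            if acc > best[1]:
                best = (t, acc)
        t = best[0]
        return lambda x: int(x[feature_idx] > t)
    return fit

def stack(X, y, base_fits):
    base = [f(X, y) for f in base_fits]                # level-0 models
    meta_X = [[clf(x) for clf in base] for x in X]     # level-1 features
    # Meta-level: vote weighted by each base model's training accuracy
    # (a deliberately simple stand-in for a learned meta-classifier).
    weights = [sum(mx[j] == label for mx, label in zip(meta_X, y))
               for j in range(len(base))]
    def predict(x):
        votes = [clf(x) for clf in base]
        score = sum(w * (1 if v else -1) for w, v in zip(weights, votes))
        return int(score > 0)
    return predict

# Toy data: the label depends only on the first feature; the second is noise.
X = [(1, 9), (2, 1), (3, 8), (7, 2), (8, 9), (9, 1)]
y = [0, 0, 0, 1, 1, 1]
stacked = stack(X, y, [make_threshold_clf(0), make_threshold_clf(1)])
```

Because the accurate base classifier earns a larger weight, the stacked predictor follows the informative feature and ignores the noisy one, which is the effect diversity studies of stacking try to quantify.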

Paper Nr: 95
Title:

Integrating BI Information into ERP Processes - Describing Enablers

Authors:

Richard Russman

Abstract: Business Intelligence (BI) systems typically report on transactions executed in enterprise information systems such as Enterprise Resource Planning (ERP) systems. Reporting is normally at the managerial or executive level, yet substantial benefits can accrue to organizations that successfully integrate BI information back into ERP processing at an operational level. How this integration is enabled is not well understood or researched. In this paper, through a multiple case study of three organizations, factors enabling this integration are described and a process framework is presented indicating the importance of these enablers and the sequence in which they need to be considered. New factors not initially considered in the literature emerged, such as big data, the use of in-memory BI, and using the same vendor for ERP and BI. However, unless integrating BI into ERP processing is appropriate for an organization, benefits will not necessarily accrue.

Paper Nr: 169
Title:

Extraction of Conservative Rules for Translation Initiation Site Prediction using Formal Concept Analysis

Authors:

Leandro M. Ferreira, Cristiano L. N. Pinto and Sérgio M. Dias

Abstract: The search for conservative features that define the translation and transcription processes used by cells to interpret and express their genetic information is one of the great challenges in molecular biology. Each transcribed mRNA sequence has only one part translated into proteins, called the Coding Sequence. The detection of this region is what motivates the search for conservative characteristics in an mRNA sequence. In eukaryotes, this region usually begins with the first occurrence of a sequence of three nucleotides (Adenine, Thymine and Guanine), the nucleotide set called the Translation Initiation Site. One way to look for conservative rules that define this region is to use Formal Concept Analysis, which can yield implications indicating a co-occurrence between positions of the sequence and the presence of the translation initiation site. This paper studies the use of this technique to extract conservative rules in order to predict the translation initiation site.

Paper Nr: 175
Title:

Estimating Reference Evapotranspiration using Data Mining Prediction Models and Feature Selection

Authors:

Hinessa Dantas Caminha and Ticiana Coelho da Silva

Abstract: Since irrigated agriculture is the most water-consuming sector in Brazil, it is a challenge to use water in a sustainable way. Evapotranspiration is the combined process of transferring moisture from the earth to the atmosphere by evaporation and by transpiration from plants. By estimating this rate of loss, farmers can efficiently manage the crop water requirement and how much water is available. In this work, we propose prediction models that can estimate evapotranspiration based on climatic data collected by an automatic meteorological station. Climatic data are multidimensional; by reducing the data dimensionality, irrelevant, redundant, or non-significant data can be removed from the results. Therefore, the proposed solution applies feature selection techniques before generating the prediction model. Thus, we can estimate the reference evapotranspiration according to the collected climatic variables. The experimental results show that models with high accuracy can be generated by the M5' algorithm with feature selection techniques.
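A filter-style feature selection step of the kind applied before model building can be sketched by ranking variables by their absolute Pearson correlation with the target and keeping the top-k. The climate readings and variable names below are hypothetical, and this simple filter is only a stand-in for the paper's actual techniques.

```python
# Illustrative sketch of filter-based feature selection: rank climatic
# variables by absolute Pearson correlation with the target (reference
# evapotranspiration) and keep the top-k. Values are hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def select_features(features, target, k):
    """Return the names of the k features most correlated with the target."""
    ranked = sorted(features, key=lambda name: -abs(pearson(features[name], target)))
    return ranked[:k]

# Hypothetical daily climate readings and evapotranspiration target.
data = {
    "temperature": [20, 24, 27, 31, 35],
    "humidity":    [80, 70, 65, 55, 40],
    "wind_speed":  [3, 1, 4, 2, 3],
}
et0 = [2.1, 2.9, 3.4, 4.2, 5.0]
selected = select_features(data, et0, k=2)
```

Here the weakly correlated variable is dropped before training, which is the dimensionality reduction the abstract describes.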

Paper Nr: 229
Title:

Experimental Evaluation of Automatic Tests Cases in Data Analytics Applications Loading Procedures

Authors:

Igor Peterson Oliveira Santos and Juli Kelle Góis Costa

Abstract: Business Intelligence (BI) relies on the Data Warehouse (DW), a historical data repository designed to support the decision making process. Despite the potential benefits of a DW, data quality issues prevent users from realizing the benefits of a BI environment and Data Analytics. Problems related to data quality can arise in any stage of the ETL (Extract, Transform and Load) process, especially in the loading phase. This paper presents an approach to automate the selection and execution of previously identified test cases for loading procedures in BI environments based on DWs. To verify and validate the approach, a unit test framework was developed. The overall goal is to achieve an efficiency improvement; the specific aim is to reduce test effort and, consequently, promote test activities in the data warehousing process. A controlled experiment was carried out in industry to investigate the adequacy of the proposed method against a generic framework for DW procedure development. Built specifically for database application tests, DbUnit was the generic framework chosen for the experiment, by convenience of the programmers. The experiment's results show that our approach clearly reduces test effort compared with executing test cases using a generic framework.

Paper Nr: 244
Title:

Semi-automated Business Process Model Matching and Merging Considering Advanced Modeling Constraints

Authors:

Markus C. Beutel and Vasil Borozanov

Abstract: Model merging helps to manage the combination and coevolution of business processes. Combining models (semi-)automatically can be a helpful technique in manifold areas and has been investigated for decades by the scientific community. The rising complexity of (business) processes in shifting environments demands a more differentiated view on model matching and merging techniques. In this domain, we identified the problem of considering additional constraints in the matching and merging process and suggest an approach that adapts state-of-the-art solutions accordingly. In addition, we state the necessary reduction rules and discuss their suitability. Moreover, we provide a prototypical implementation of a matching and merging tool, which allows further investigation of the approach concerning quality, usefulness, and efficiency.

Paper Nr: 262
Title:

Graph Databases: Neo4j Analysis

Authors:

José Guia

Abstract: The volume of data is growing at an increasing rate. This growth is both in size and in connectivity, where connectivity refers to the increasing presence of relationships between data. Social networks such as Facebook and Twitter store and process petabytes of data each day. Graph databases have gained renewed interest in the last years, due to their applications in areas such as the Semantic Web and Social Network Analysis. Graph databases provide an effective and efficient solution to data storage and querying in these scenarios, where data is rich in relationships. In this paper, we analyze the fundamentals of graph databases, showing their main characteristics and advantages. We study Neo4j, the leading graph database on the market, and evaluate its performance using the Social Network Benchmark (SNB).

Paper Nr: 313
Title:

Beyond Nolan’s Nine-stage Model - Evolution and Value of the Information System of a Technical Office in a Furniture Factory

Authors:

Andrés Boza and Javier Llobregat

Abstract: This paper reviews the evolution of information systems. Nolan's Model has been reviewed, and a new Smart Era seems to be arising. The model has been used to analyse the development stages of a technical office's information system in a furniture factory. The company's necessarily changing business model over time has been analysed from the perspective of the contribution of the technical office's information system to its main business process.

Area 2 - Artificial Intelligence and Decision Support Systems

Full Papers
Paper Nr: 38
Title:

Combining Machine Learning with a Genetic Algorithm to Find Good Compiler Optimization Sequences

Authors:

Nilton Luiz Queiroz Junior

Abstract: Artificial Intelligence is a strategy applied to several problems in computer science. One of them is finding good compiler optimization sequences for programs. Currently, strategies such as Genetic Algorithms and Machine Learning have been used to solve it. This article proposes an approach that combines Machine Learning and Genetic Algorithms to solve this problem. The obtained results indicate that the proposed approach achieves performance gains of up to 3.472% over Genetic Algorithms and 4.94% over Machine Learning.
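The genetic-algorithm side of such an approach can be sketched as follows: sequences of optimization flags evolve under selection, one-point crossover, and mutation. The flag names and the fitness function (a stand-in for actually compiling and timing programs) are invented for illustration and are not the paper's setup.

```python
import random

# Toy sketch of a genetic algorithm searching over compiler optimization
# sequences. The flags and the fitness function are hypothetical.

FLAGS = ["inline", "unroll", "vectorize", "licm", "gvn", "dce"]
SEQ_LEN = 4

def fitness(seq):
    """Hypothetical speedup score: rewards distinct flags and a helpful ordering."""
    score = len(set(seq))
    if seq[0] == "inline":
        score += 2  # pretend inlining first enables later passes
    return score

def evolve(pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(FLAGS) for _ in range(SEQ_LEN)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]         # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, SEQ_LEN)
            child = a[:cut] + b[cut:]            # one-point crossover
            if rng.random() < 0.2:               # mutation
                child[rng.randrange(SEQ_LEN)] = rng.choice(FLAGS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

In a real system the fitness call would compile and benchmark the program under the candidate sequence, and a machine learning model could seed or prune the population, which is the combination the paper explores.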

Paper Nr: 52
Title:

Unsupervised Segmentation of Nonstationary Data using Triplet Markov Chains

Authors:

Mohamed El Yazid Boudaren, Emmanuel Monfrini and Kadda Beghdad Bey

Abstract: An important issue in statistical image and signal segmentation consists in estimating the hidden variables of interest. For this purpose, various Bayesian estimation algorithms have been developed, particularly in the framework of hidden Markov chains, thanks to their efficient theory that allows one to recover the hidden variables from the observed ones even for large data. However, such models fail to handle nonstationary data in the unsupervised context. In this paper, we show how the recent triplet Markov chains, which are strictly more general models with comparable computational complexity, can be used to overcome this limit through two different ways: (i) in a Bayesian context by considering the switches of the hidden variables regime depending on an additional Markov process; and, (ii) by introducing Dempster-Shafer theory to model the lack of precision of the hidden process prior distributions, which is the origin of data nonstationarity. Furthermore, this study analyzes both approaches in order to determine which one is better-suited for nonstationary data. Experimental results are shown for sampled data and noised images.

Paper Nr: 131
Title:

A Recommendation System for Enhancing the Personalized Search Itineraries in the Public Transportation Domain

Authors:

Aroua Essayeh and Mourad Abed

Abstract: In traditional transport information systems, users must explicitly provide the information related to both their profiles and their travels in order to receive a personalized response. However, this requires, among other things, extra effort from the user in terms of search time. We aim not only to identify users' information implicitly, but also to anticipate their needs even if some data are missing, through a recommender system based on a collaborative filtering technique. In this work, the information related to users is represented using an ontology, which has proved a far more adequate model for representing data semantically.

Paper Nr: 174
Title:

Developer Modelling using Software Quality Metrics and Machine Learning

Authors:

Franciele Beal

Abstract: Software development has become an essential activity for organizations that increasingly rely on it to manage their business. However, poor software quality reduces customer satisfaction, while high-quality software can reduce repairs and rework by more than 50 percent. Software development is now seen as a collaborative and technology-dependent activity performed by a group of people. For all these reasons, correctly choosing software development team members can be decisive. With this motivation, classifying participants into different profiles can be useful during team formation and task distribution in project management. This paper presents a developer modeling approach based on software quality metrics. Quality metrics are collected dynamically, and these metrics compose the developer model. A machine learning-based method is presented. Results show that it is possible to use quality metrics to model developers.

Paper Nr: 199
Title:

Anomaly Detection in Real-Time Gross Settlement Systems

Authors:

Ron Triepels

Abstract: We discuss how an autoencoder can detect system-level anomalies in a real-time gross settlement system by reconstructing a set of liquidity vectors. A liquidity vector is an aggregated representation of the underlying payment network of a settlement system for a particular time interval. Furthermore, we evaluate the performance of two autoencoders on real-world payment data extracted from the TARGET2 settlement system. We do this by generating different types of artificial bank runs in the data and determining how the autoencoders respond. Our experimental results show that the autoencoders are able to detect unexpected changes in the liquidity flows between banks.
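As a simplified illustration of reconstruction-based anomaly scoring, the sketch below reconstructs each vector from its top principal component (a linear special case of an autoencoder, computed here by power iteration) and flags vectors with a large reconstruction error. The 2-D "liquidity vectors" are hypothetical toy data, not TARGET2 data.

```python
# Simplified sketch of reconstruction-based anomaly detection. The paper
# uses autoencoders; a top-principal-component reconstruction (a linear
# special case) stands in here, on hypothetical 2-D toy data.

def top_component(data, iters=100):
    """Power iteration for the leading principal direction of centered data."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    v = [1.0] * d
    for _ in range(iters):
        w = [0.0] * d                      # w = C v (unnormalized covariance)
        for row in centered:
            proj = sum(row[j] * v[j] for j in range(d))
            for j in range(d):
                w[j] += proj * row[j]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return means, v

def reconstruction_error(x, means, v):
    """Distance between x and its projection onto the learned direction."""
    centered = [xj - mj for xj, mj in zip(x, means)]
    proj = sum(cj * vj for cj, vj in zip(centered, v))
    recon = [proj * vj for vj in v]
    return sum((cj - rj) ** 2 for cj, rj in zip(centered, recon)) ** 0.5

# Normal liquidity vectors lie near the direction (1, 1).
normal = [[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.9], [5.0, 5.1]]
means, v = top_component(normal)
anomaly = [3.0, 0.2]  # e.g. an unexpected one-sided liquidity drain
```

Vectors that fit the learned structure reconstruct almost exactly, while a vector that breaks the usual pattern, such as the one-sided drain above, produces a much larger error and can be flagged.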

Paper Nr: 232
Title:

Inference Approach to Enhance a Portuguese Open Information Extraction

Authors:

Cleiton Fernando Lima Sena

Abstract: Open Information Extraction (Open IE) enables the extraction of facts from large quantities of texts written in natural language. Although most research has been done on English texts, methods and techniques for other languages have been less frequent, even though languages other than English account for 48% of the content available on websites around the world. In this work, we propose a method for extracting facts in Portuguese without pre-determining the types of the facts. Additionally, we increase the quantity of extracted facts by using an inference approach. Our inference method is composed of two mechanisms: a transitive one and a symmetric one. To the best of our knowledge, this is the first time an inference approach has been used to extract facts from Portuguese texts. Our proposal yielded an increase of 36% in the quantity of valid facts extracted by a Portuguese Open IE system, with fact quality comparable to that of English approaches.

Paper Nr: 263
Title:

Traffic Accidents Analysis using Self-Organizing Maps and Association Rules for Improved Tourist Safety

Authors:

Andreas Gregoriades and Andreas Christodoulides

Abstract: Traffic accidents are the most common cause of injury among tourists. This paper presents a method and a tool for analysing historical traffic accident records using data mining techniques, for the development of an application that warns tourist drivers of possible accident risks. The knowledge necessary for the specification of the application is based on patterns distilled from spatiotemporal analysis of historical accident records. Raw accident data obtained from police records underwent pre-processing and was subsequently integrated with secondary traffic-flow data from a mesoscopic simulation. Two data mining techniques were applied to the resulting dataset: clustering with self-organizing maps (SOM) and association rules. The former was used to identify accident black spots, while the latter was applied to the clusters that emerged from SOM to identify the causes of accidents in each black spot. The identified patterns were used to develop a software application that alerts travellers to imminent accident risks, using driver characteristics along with real-time feeds of drivers' geolocation and environmental conditions.
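The association-rule step can be illustrated with a small support/confidence miner run over the records of one black-spot cluster. The accident attributes, records, and thresholds below are hypothetical stand-ins, not the paper's data.

```python
from itertools import combinations

# Illustrative sketch of the association-rule step: frequent itemsets
# within one accident black spot are turned into rules scored by support
# and confidence. The accident attributes below are hypothetical.

def association_rules(transactions, min_support=0.4, min_confidence=0.7):
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    rules = []
    for size in (2, 3):
        for itemset in combinations(items, size):
            support = sum(set(itemset) <= t for t in transactions) / n
            if support < min_support:
                continue
            for consequent in itemset:
                antecedent = set(itemset) - {consequent}
                ant_support = sum(antecedent <= t for t in transactions) / n
                confidence = support / ant_support
                if confidence >= min_confidence:
                    rules.append((frozenset(antecedent), consequent, confidence))
    return rules

# Hypothetical accident records from one black-spot cluster.
accidents = [
    {"rain", "night", "speeding"},
    {"rain", "night", "skid"},
    {"rain", "speeding"},
    {"dry", "day", "speeding"},
    {"rain", "night", "speeding"},
]
rules = association_rules(accidents)
```

A rule such as {night} -> rain with confidence 1.0 would tell the warning application which conditions co-occur with accidents at that black spot, so alerts can be triggered when those conditions hold.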

Paper Nr: 275
Title:

Decision Making Support in the Scheduling of Chemotherapy Coping with Quality of Care, Resources and Ethical Constraints

Authors:

Christophe Ponsard, Renaud De Landtsheer and Yoann Guyot

Abstract: The scheduling of clinical pathways such as oncological treatments involves a tricky decision process because the therapeutic regimens impose strict timing constraints under possibly limited resources, such as bed and caregiver availability, with an increasing number of patients. Such constraints must be met simultaneously for every patient treated at the same time, by making the best use of limited hospital resources. The scheduling must also be robust to adverse events such as unexpected delays or partial treatment deliveries due to treatment toxicity. In this paper, we show how such a decision process can be driven by care quality indicators covering all these dimensions. We demonstrate how constraint-based local search techniques can cope with real-world-size chemotherapy pathways and efficiently adapt to changes. We also share some ethical concerns about the way the objective function is expressed and, more generally, about how the tool integrates into the medical decision process.

Paper Nr: 326
Title:

Question’s Advisor - A Wizard Interface to Teach Novice Programmers How to Post “Better” Questions in Stack Overflow

Authors:

José Remígio, Franck Aragão and Cleyton Souza

Abstract: Programmers often turn to online communities to find help for a problem they are facing. However, after sharing a question, its author has no guarantee of whether or when an answer will arrive. Recent studies have found that low quality is one of the top reasons why questions remain unanswered. In this work, we conducted a qualitative study aiming to identify what programmers look for in a question before they decide to answer it. Based on this feedback, we designed a tool to help programmers write high-quality questions. We named the app Question's Advisor, due to its role of helping without forcing the user to follow its advice, and it is available for desktop and mobile clients. We believe it could be very helpful, especially for novice programmers.

Short Papers
Paper Nr: 56
Title:

Composite Alternative Pareto Optimal Recommender System (CAPORS)

Authors:

William Jeffries and Alexander Brodsky

Abstract: We propose a methodology and present a system for generating composite alternative recommendations, combining user-guided continuous improvement with Pareto optimal trade-off considerations. The system consists of (1) a model to generate the recommendation space; (2) metrics for measuring each recommendation; (3) an analytics function for computing composite alternative metrics and constraints; (4) system configuration settings; (5) an algorithm for calculating the Pareto optimal curve of recommendations; (6) an algorithm for generating user-guided improvements using relaxed constraints; (7) charting functionality for plotting recommendations; and (8) a user interface enabling users to accept or improve upon selected recommendations.

Paper Nr: 109
Title:

Exploring Text Classification Configurations - A Bottom-up Approach to Customize Text Classifiers based on the Visualization of Performance

Authors:

Alejandro Gabriel Villanueva Zacarias

Abstract: Automated Text Classification (ATC) is an important technique to support industry expert workers, e.g. in product quality assessment based on part failure reports. In order to be useful, ATC classifiers must entail reasonable costs for a certain accuracy level and processing time. However, there is little clarity on how to customize the composing elements of a classifier for this purpose. In this paper we highlight the need to configure an ATC classifier considering the properties of the algorithm and the dataset at hand. In this context, we develop three contributions: (1) the notion of ATC Configuration to arrange the relevant design choices to build an ATC classifier, (2) a Feature Selection technique named Smart Feature Selection, and (3) a visualization technique, called ATCC Performance Cube, to translate the technical configuration aspects into a performance visualization. With the help of this Cube, business decision-makers can easily understand the performance and cost variability that different ATC Configurations have in their specific application scenarios.

Paper Nr: 120
Title:

Governance Policies in IT Service Support

Authors:

Abhinay Puvvala and Veerendra K. Rai

Abstract: An IT service support provider, whether outsourced or kept in-house, has to abide by Service Level Agreements (SLAs) that are derived from business needs. Critical for an IT service support provider are the human resources expected to resolve tickets. It is essential that the policies governing ticket movement among these resources follow business objectives such as service availability and cost reduction. In this study, we propose an agent-based model that represents an IT service support system. A vital component of the model is the agent 'Governor', which makes policy decisions by reacting to changes in the environment. The paper also studies the impact of various behavioural attributes of the Governor on the service objectives.

Paper Nr: 149
Title:

A Fuzzy Scheduling Mechanism for a Self-Adaptive Web Services Architecture

Authors:

Anderson Francisco Talon and Edmundo Roberto Mauro Madeira

Abstract: Web services have become increasingly prevalent. Monitoring these services ensures Quality of Service and is the basis for verifying, and potentially predicting, e-contract violations. This paper proposes a fuzzy scheduling mechanism that attempts to predict a possible e-contract violation based on historical data of the provider’s services. Consequently, the architecture self-configures by changing service priorities, making the provider process high-priority services before low-priority ones. This prediction also helps the self-optimization of the architecture, and a decrease in e-contract violations can be observed. Although it is not always possible to predict a failure, the architecture is capable of self-healing through recovery actions. Compared with other scheduling mechanisms known in the literature, the fuzzy scheduling achieves an improvement of 31.52% in e-contract accomplishment and a decrease of 35.59% in average response time. Furthermore, with the fuzzy scheduling the overload of the provider was better balanced, varying at most 8.43%, while the variation in other scheduling mechanisms reached 41.15%. The results show that the fuzzy scheduling mechanism is promising.
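
The abstract does not give the actual rule base; the following is a hypothetical sketch of how a fuzzy priority could be derived from historical response times, with invented membership functions and rule weights:

```python
# Hypothetical sketch, not the paper's mechanism: compare the provider's
# recent average response time against the contracted deadline and map the
# resulting violation risk to a scheduling priority, Sugeno style.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_priority(avg_response, deadline):
    """Map the ratio of observed response time to contracted deadline onto a
    priority in [0, 1]; higher means 'process this service first'."""
    r = avg_response / deadline            # r > 1 means the e-contract is violated
    low    = tri(r, -0.5, 0.0, 0.7)        # comfortably within the deadline
    medium = tri(r, 0.4, 0.8, 1.1)         # approaching the deadline
    high   = tri(r, 0.9, 1.5, 10.0)        # violation likely or occurring
    # Sugeno-style defuzzification: weighted average of crisp rule outputs.
    weight = low + medium + high
    return (low * 0.1 + medium * 0.5 + high * 1.0) / weight if weight else 0.0
```

A service whose history sits near the deadline would then be promoted ahead of comfortably fast ones in the provider's queue.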

Paper Nr: 179
Title:

A Novel Clustering-based Approach for SaaS Services Discovery in Cloud Environment

Authors:

Kadda Beghdad Bey and Hassina Nacer

Abstract: Cloud computing is an emerging computing paradigm in which both software and hardware resources are provided over the internet as a service to users. Software as a Service (SaaS) is one of the important services offered through the cloud and receives substantial attention from both providers and users. Service discovery is, however, a difficult process given the sharp increase in the number of services offered by different providers. A multi-agent system (MAS) is a distributed computing paradigm, based on multiple interacting agents, that aims to solve complex problems through a decentralized approach. In this paper, we present a novel approach for SaaS service discovery based on multi-agent systems in cloud computing environments. More precisely, the purpose of our approach is to satisfy the user’s needs in terms of both result accuracy and request processing time. To establish the interest of the proposed solution, experiments are conducted on a simulated dataset.

Paper Nr: 183
Title:

A Hybrid Prediction Model Integrating Fuzzy Cognitive Maps with Support Vector Machines

Authors:

Panayiotis Christodoulou

Abstract: This paper introduces a new hybrid prediction model that combines Fuzzy Cognitive Maps (FCM) and Support Vector Machines (SVM) to increase accuracy. The proposed model first uses the FCM part to discover correlation patterns and interrelationships between data variables and to form a single latent variable. It then feeds this variable to the SVM part to improve prediction capabilities. The efficacy of the hybrid model is demonstrated through its application to two different problem domains. The experimental results show that the proposed model is better than the traditional SVM model and also outperforms other widely used supervised machine-learning techniques such as weighted k-NN, Linear Discriminant Analysis and classification trees.
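
A minimal sketch of the FCM stage described above; the update rule is one common FCM convention, and the weight matrix, concept values and output index in the usage below are invented for illustration, not taken from the paper:

```python
import math

def fcm_latent(x, W, out_idx, iters=50):
    """Iterate a Fuzzy Cognitive Map update state <- sigmoid(W * state + state)
    and return the activation of the designated output concept, which serves
    as the single latent variable handed to the SVM stage."""
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    state = list(x)
    for _ in range(iters):
        state = [sig(sum(W[i][j] * state[j] for j in range(len(state))) + state[i])
                 for i in range(len(state))]
    return state[out_idx]
```

The returned value would then be appended to (or substituted for) the raw features before training the SVM classifier.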

Paper Nr: 196
Title:

Weaknesses of Ant System for the Distributed Job Shop Scheduling Problem

Authors:

Imen Chaouch

Abstract: Globalization has opened up huge opportunities for plant and industrial investors. The single-plant problem has now been generalised to the multi-factory setting. This paper deals with the Distributed Job Shop Scheduling problem in multi-factory environments. The solving process consists of finding an effective way to assign jobs to factories and then generating a good operation schedule. To this end, an Ant System algorithm is implemented. Several numerical experiments are conducted to evaluate the performance of the Ant System algorithm applied to Distributed Job Shop Scheduling, and the results show the shortcomings of the standard Ant System compared to algorithms developed in the literature.
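
As an illustration of the job-to-factory assignment step in a standard Ant System (pheromone values, heuristic desirabilities and parameters below are invented, not the paper's): each ant picks a factory with probability proportional to pheromone^alpha times heuristic^beta, and pheromone then evaporates globally while the chosen edge is reinforced.

```python
import random

def assign_job(pheromone, heuristic, alpha=1.0, beta=2.0, rng=random):
    """Pick a factory index for one job, Ant System style: roulette-wheel
    selection biased by pheromone and heuristic desirability."""
    scores = [(p ** alpha) * (h ** beta) for p, h in zip(pheromone, heuristic)]
    total = sum(scores)
    r = rng.random() * total
    acc = 0.0
    for idx, s in enumerate(scores):
        acc += s
        if r <= acc:
            return idx
    return len(scores) - 1

def evaporate_and_deposit(pheromone, chosen, rho=0.1, deposit=1.0):
    """Global pheromone update: evaporate everywhere, reinforce the choice."""
    return [(1 - rho) * p + (deposit if i == chosen else 0.0)
            for i, p in enumerate(pheromone)]
```

With uniform pheromone, a factory with much higher heuristic desirability (e.g. lower current load) attracts almost all assignments.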

Paper Nr: 203
Title:

Evaluating Knowledge Representations for Program Characterization

Authors:

João Fabrício Filho

Abstract: Knowledge representation attempts to organize the knowledge of a context so that automated systems can use it to solve complex problems. Among several difficult problems, one worth mentioning is code generation, which is undecidable due to its complexity. A technique to mitigate this problem is to represent the knowledge and use an automatic reasoning system to infer an acceptable solution. This article evaluates knowledge representations for program characterization in the context of code-generation systems. The experimental results show that numerical program features used as the knowledge representation can reach 85% of the best possible results. Furthermore, the results demonstrate that an automatic code-generation system using this knowledge representation is capable of outperforming other code-generating systems.

Paper Nr: 208
Title:

Using Evolving Graphs to Evaluate Structural Openness in Multi-Agent Systems

Authors:

Sondes Hattab

Abstract: The evaluation of Multi-Agent Systems (MAS) is addressed in the literature in a twofold manner: from an external point of view, through the assessment of design methodologies, development tools and platforms, or from an internal point of view, by measuring the functional characteristics of MAS applications. The latter kind of evaluation is not sufficiently addressed and is mostly oriented towards structural properties. We believe behavioural characteristics may considerably affect MAS performance and have to be assessed in order to judge the quality of a MAS correctly. Thus, our aim is to propose an approach to evaluate one of the most important behavioural characteristics of MAS: openness. We focus especially on structural openness, and for this purpose we suggest a three-step method: observation, modelling and measurement. The modelling technique is based on an evolving graph whose properties are used to estimate metrics for the evaluation. Our approach is then tested and validated on a road traffic application.
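
One plausible way to turn the observation and modelling steps into a structural-openness measure (the metric below is illustrative, not the paper's formula): compare successive snapshots of the agent set in the evolving graph and average the fraction of the population that changed.

```python
def openness(snapshots):
    """snapshots: list of sets of agent ids, one per observation instant.
    Return the mean fraction of the population that joined or left between
    consecutive snapshots (0.0 = fully closed system)."""
    if len(snapshots) < 2:
        return 0.0
    rates = []
    for prev, cur in zip(snapshots, snapshots[1:]):
        changed = len(prev ^ cur)            # arrivals + departures
        rates.append(changed / len(prev | cur))
    return sum(rates) / len(rates)
```

In a road-traffic setting, each snapshot would be the set of vehicle agents currently present, so a busy entry/exit road scores higher than a closed loop.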

Paper Nr: 233
Title:

Application of Fuzzy Inference Systems in the Transmission of Wireless Sensor Networks

Authors:

Pedro Henrique Gouvea Coelho, J. F. M. do Amaral and N. N. de Almeida

Abstract: The purpose of this paper is to apply fuzzy logic techniques to determine the data transmission path in wireless sensor networks. Wireless sensor networks are used in many applications and require efficient operation, with speedy transmission of information and a long lifespan. For suitable sensor transmission, a fuzzy system is defined with the goal of minimizing the information transmission distance and maximizing the battery lifespan of the network routers. Case-study simulations were carried out, and the results indicate satisfactory performance of the method.

Paper Nr: 277
Title:

The Multiagent Model for Predicting the Behaviour and Short-term Forecasting of Retail Prices of Petroleum Products

Authors:

Leonid Galchynsky and Andriy Svydenko

Abstract: In this study, we develop a multi-agent system model for predicting the behaviour of petroleum product prices using short-term forecasting. Having analysed the issue, we found that the ability of multi-agent models to describe the behaviour of individual market agents, along with the oligopolistic nature of the market, makes it possible to describe long-term cooperation of agents. But the accuracy of short-term price predictions for the multi-agent model is insufficient. According to our hypothesis, this is caused primarily by the nature of the agent’s heuristic algorithm as well as by taking price indices as the sole input. The accuracy of the short-term price forecast for the multi-agent model is somewhat inferior to co-integration models and to forecasting models based on neural networks that use historical price data of petroleum products. In this paper we study a hybrid model containing a set of agents whose price reactions are based on a neural network trained for each agent. With this approach it is possible to consider not just past price data, but also factors such as potential threats and market destabilisation. A comparison between the prices obtained through our short-term forecast model and real data shows the former’s advantage over pure multi-agent models, co-integration models and forecasting models based on neural networks.

Paper Nr: 292
Title:

Detection of Runtime Normative Conflict in Multi-Agent Systems based on Execution Scenarios

Authors:

Mairon Belchior and Viviane Torres da Silva

Abstract: Norms in multi-agent systems are used as a mechanism to regulate the behavior of autonomous and heterogeneous agents and to maintain the social order of the society of agents. Norms define what is permitted, prohibited and obligatory. One of the challenges in designing and managing systems governed by norms is that norms can conflict with one another. Two norms are in conflict when the fulfillment of one causes the violation of the other and vice versa. Several studies have proposed mechanisms to detect conflicts between norms. However, there is a kind of normative conflict not yet investigated in the design phase, here called runtime conflicts, which can only be detected with information about the runtime execution of the system. This paper presents an approach based on execution scenarios to detect normative conflicts that depend on the execution order of runtime events in multi-agent systems.
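
A simplified sketch of a scenario-based check of this kind (the norm representation is invented for the example): a norm is a tuple (modality, agent, action, activating event, deactivating event), and an obligation and a prohibition over the same agent and action conflict in a given trace exactly when their activation intervals overlap.

```python
def active_interval(norm, trace):
    """Return the [start, end) index interval in which the norm is active,
    based on the positions of its activating/deactivating events in the trace."""
    _, _, _, start_ev, end_ev = norm
    start = trace.index(start_ev)
    end = trace.index(end_ev) if end_ev in trace else len(trace)
    return start, end

def runtime_conflicts(norms, trace):
    """Report pairs of norms whose modalities clash on the same agent and
    action and whose activation intervals overlap in this execution scenario."""
    conflicts = []
    for i, n1 in enumerate(norms):
        for n2 in norms[i + 1:]:
            if n1[1:3] == n2[1:3] and {n1[0], n2[0]} == {"obliged", "prohibited"}:
                s1, e1 = active_interval(n1, trace)
                s2, e2 = active_interval(n2, trace)
                if max(s1, s2) < min(e1, e2):   # intervals overlap
                    conflicts.append((n1, n2))
    return conflicts
```

The same pair of norms may or may not conflict depending on the event order, which is exactly why such conflicts escape design-time analysis.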

Paper Nr: 294
Title:

Identifying Innovative Documents: Quo vadis?

Authors:

Ivonne Schröter, Jacob Krüger, Philipp Ludwig and Marcus Thiel

Abstract: The number of new research documents and patents published each year is steadily increasing. Despite this development, identifying innovative documents in a timely manner has received only little attention in research. Nevertheless, this use case is important for companies that strive to keep up with current innovations in their field. However, since existing solutions do not take the context and background of the particular firm or researcher into account, they fall short of supporting users in their search for suitable documents. In this paper, we describe an industrial case study we conducted with sheet-metal working companies and related research institutes in Germany. We i) report a qualitative study on innovation research, ii) provide a list of features that industrial researchers demanded, and iii) discuss implementation challenges for systems that support interactive retrieval of innovative documents. Based on the initial results, we argue that existing systems fall short of providing an integrated workflow. Overall, we discuss how to implement such a system and the corresponding problems.

Paper Nr: 295
Title:

Fuzzy Based Model to Detect Patient’s Health Decline in Ambient Assisted Living

Authors:

Milene Santos Teixeira, Vinicius Maran and João Carlos D. Lima

Abstract: Detecting a decline in the health condition of a patient may still be considered a challenge in Ambient Assisted Living (AAL), since the concept of ‘decline’ is vague and imprecise. In this context, fuzzy logic is an excellent alternative for AAL systems. This paper presents a model based on fuzzy-logic reasoning to identify a possible decline in a patient’s health condition. To achieve this goal, the model considers relevant situations that may somehow impact the patient. To evaluate the model, a case study was developed, showing that the model can simulate human reasoning and be used in an AAL system.

Posters
Paper Nr: 16
Title:

A Neuro-automata Decision Support System for Phytosanitary Control of Late Blight

Authors:

Gizelle Kupac Vianna

Abstract: Foliage diseases in plants can cause a reduction in both the quality and quantity of agricultural production. In our work, we designed and implemented a decision support system that may help small tomato producers monitor their crops by automatically detecting the symptoms of foliage diseases. We have also investigated ways to recognize the late blight disease from the analysis of digital tomato images, using a pair of multilayer perceptron neural networks. One neural network is responsible for identifying healthy regions of the tomato leaf, while the other identifies injured regions. The networks’ outputs are combined to generate repainted tomato images in which the injuries on the plant are highlighted, and to calculate the damage level of each plant. These levels are then used to construct a situation map of the farm, where a cellular automaton simulates the outbreak evolution over the fields. The simulator can test different pesticide actions, helping to decide when to start spraying and to analyse the losses and gains of each course of action.

Paper Nr: 20
Title:

Load Balancing Heuristic for Tasks Scheduling in Cloud Environment

Authors:

Kadda Beghdad Bey and Farid Benhammadi

Abstract: Distributed systems, originally intended to support applications by connecting distributed entities, have evolved into supercomputing platforms that run a single application. Currently, cloud computing has arisen as a new trend in the world of IT (Information Technology). Cloud computing is an architecture in full development and has become a new computing model for running scientific applications. In this context, resource allocation is one of the most challenging problems: optimally assigning the available resources to the cloud applications that need them is known to be an NP-complete problem. In this paper, we propose a new task scheduling strategy for resource allocation that minimizes the completion time (makespan) in a cloud computing environment. To show the interest of the proposed solution, experiments are conducted on a simulated dataset.
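
The abstract does not detail the proposed strategy; as a baseline for the same makespan objective, here is the classic Longest-Processing-Time-first heuristic, which places each task on the currently least-loaded resource:

```python
def lpt_schedule(task_times, n_resources):
    """Schedule independent tasks on identical resources, longest task first,
    each onto the least-loaded resource. Returns (assignment, makespan),
    where assignment maps task index -> resource index."""
    loads = [0.0] * n_resources
    assignment = {}
    for task, t in sorted(enumerate(task_times), key=lambda kv: -kv[1]):
        target = min(range(n_resources), key=loads.__getitem__)
        assignment[task] = target
        loads[target] += t
    return assignment, max(loads)
```

LPT is a well-known approximation for this NP-complete problem (makespan at most 4/3 of the optimum on identical machines), which makes it a natural reference point for a new scheduling strategy.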

Paper Nr: 138
Title:

FzMEBN: Toward a General Formalism of Fuzzy Multi-Entity Bayesian Networks for Representing and Reasoning with Uncertain Knowledge

Authors:

Riali Ishak

Abstract: Representing and reasoning with uncertainty is a topic of growing interest within the artificial intelligence (AI) community. In this context, Multi-Entity Bayesian Networks (MEBNs) have been proposed as a candidate solution: a powerful tool based on the expressiveness of first-order logic. In the last decade they have shown their effectiveness in various complex and uncertainty-rich domains. However, in most cases the random variables are vague or imprecise by nature. To deal with this problem, we extend standard Multi-Entity Bayesian Networks to improve their capability to represent and reason with uncertainty. This paper details a promising solution based on fuzzy logic that overcomes the weaknesses of classical Multi-Entity Bayesian Networks. In addition, we propose a general process for the inference task, consisting of four steps: (1) generating a fuzzy situation-specific Bayesian network, (2) computing fuzzy evidence, (3) adding virtual nodes, and (4) performing fuzzy probabilistic inference. Our process is based on the virtual evidence method in order to incorporate fuzzy evidence in probabilistic inference; moreover, approximate or exact algorithms can be used, a choice that depends on the contribution of the domain expert and the complexity of the problem. Illustrative examples taken from the literature are considered to show the potential applicability of our extended MEBN.
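A minimal numeric illustration of steps (3) and (4), the virtual evidence method, on a single binary variable (the numbers are invented): fuzzy evidence is encoded as a likelihood ratio attached to a virtual child node, and the posterior follows from an ordinary Bayesian update on the odds.

```python
def virtual_evidence_posterior(prior_true, likelihood_ratio):
    """prior_true: P(X = true).
    likelihood_ratio: P(v | X = true) / P(v | X = false), where v is the
    virtual node whose likelihoods encode the fuzzy membership of the
    observation. Returns P(X = true | v)."""
    odds = (prior_true / (1.0 - prior_true)) * likelihood_ratio
    return odds / (1.0 + odds)
```

In a full network, the same virtual-node construction lets any standard exact or approximate inference algorithm absorb the fuzzy evidence unchanged, which is the point of step (3).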

Paper Nr: 176
Title:

Multi-agent Coordination using Reinforcement Learning with a Relay Agent

Authors:

Wiem Zemzem

Abstract: This paper focuses on distributed reinforcement learning in cooperative multi-agent systems, where several simultaneously and independently acting agents have to perform a common foraging task. To this end, a novel cooperative action selection strategy and a new kind of agent, called a "relay agent", are proposed. The conducted simulation tests indicate that our proposals improve coordination between learners and are extremely efficient in terms of cooperation in large, unknown and stationary environments.

Paper Nr: 186
Title:

Mobile Gift Recommendation Algorithm

Authors:

Caíque de Paula Pereira

Abstract: The mobile application market and e-commerce sales have grown steadily, along with studies and product recommendation solutions implemented in e-commerce systems. In this context, this paper proposes a recommendation algorithm for mobile devices based on the COREL framework. The proposed algorithm is a customization of the COREL framework, motivated by the implementation complexity associated with iOS mobile applications. This work therefore aims to customize a gift recommendation algorithm in the context of mobile devices, using the user’s preferences as the main input for gift recommendation in the Giftr application.

Paper Nr: 211
Title:

OptiHealth: A Recommender Framework for Pareto Optimal Health Insurance Plans

Authors:

Fernando Boccanera and Alexander Brodsky

Abstract: Choosing a health insurance plan, even when the plans are standardized, is a daunting task. Research has shown that the complexity of the task leads consumers to make non-optimal choices most of the time. While a number of systems were introduced to assist the selection of health insurance plans, they fail to significantly reduce the main causes of poor decisions. To address this problem, this paper proposes OptiHealth, a recommender framework for Pareto optimal selection of health insurance plans. The proposed framework is based on (1) actuarial analysis of medical data and a method to accurately estimate the expected annual cost tailored to specific individuals, (2) finding and presenting a small number of diversified Pareto optimal plans based on key performance indicators, and (3) allowing decision makers to iteratively conduct a trade-off analysis.
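
A sketch of the Pareto selection in step (2) over two hypothetical key performance indicators, expected annual cost (lower is better) and a coverage score (higher is better); the plan data and KPI choice are illustrative, not OptiHealth's actual indicators:

```python
def pareto_plans(plans):
    """plans: list of (name, expected_cost, coverage).
    Return the plans not dominated by any other plan, where a dominator is
    no worse on both KPIs and strictly better on at least one."""
    def dominates(a, b):
        return (a[1] <= b[1] and a[2] >= b[2]) and (a[1] < b[1] or a[2] > b[2])
    return [p for p in plans if not any(dominates(q, p) for q in plans)]
```

Presenting only this non-dominated set is what keeps the choice small and trade-off-oriented: every surviving plan is the best available at its price point.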

Paper Nr: 227
Title:

A Knowledge-based Approach for Personalised Clothing Recommendation for Women

Authors:

Hemilis Joyse Barbosa Rocha, Evandro de Barros Costa and Emanuele Tuane Silva

Abstract: Recommendation system technology has emerged as a promising approach to support the fashion domain in electronic commerce. In this paper, we propose an approach for a personalized clothing recommendation system that helps women identify appropriate clothing categories, together with models linked to clothing images, based mainly on their fashion styles and body types. Besides an intelligent user interface, our recommendation approach comprises two main components: user modeling and clothing recommendation, the latter responsible for recommending fashion clothing items to women. The user modeling component creates and updates the user model through two knowledge-based mechanisms: the first automatically identifies the fashion style, and the second detects the body type. We evaluated our recommendation approach, and preliminary results indicate that it significantly supports women in their choices.

Paper Nr: 265
Title:

A Case-based Approach for Reusing Decisions in the Software Development Process

Authors:

Hércules Antonio do Prado and Edilson Ferneda

Abstract: This paper proposes a process for supporting the reuse of decisions made during software development, involving architectural, technological, or management issues, in order to help reduce time and costs. A survey with software engineering professionals was performed to identify a set of decision-making cases that could be applied to design the process. From the results of this survey, a process was implemented, including related software and procedures that ease the reuse of decisions made during software development projects. Design Rationale techniques were applied to structure the cases, which were represented and recovered by means of a Case-Based Reasoning approach. The applicability of this approach was evaluated through a two-phase case study. The first phase encompassed the construction of the case base using the previously identified cases, and the second focused on the application of the system and its evaluation by means of group dynamics. The focus group was chosen from a set of software engineering experts from companies and universities located in Brasília, the Brazilian capital. Satisfactory results were found with respect to the usefulness of the model in improving the performance of software development when past cases are available.

Paper Nr: 287
Title:

Context-Aware Customizable Routing Solution for Fleet Management

Authors:

Janis Grabis, Žanis Bondars and Jānis Kampars

Abstract: Vehicle routing solutions delivered to companies as packaged applications combine vehicle routing decision-making models with supporting services for data integration, presentation and other functionality. The packaged applications are often tailored to the specific needs of their users through customization methods and mainly focus on the supporting services rather than on modification of the routing models. This paper proposes a method for customization of the routing model as a part of the routing application. The customization method enables companies to incorporate their specific decision-making goals and context into the routing model without redesigning the model itself. The routing model is also capable of adapting its behaviour according to observed interdependencies among decision-making goals and routing context. An illustrative example is provided to demonstrate customization of the routing solution and to highlight the multi-objective and context-dependent characteristics of the vehicle routing problem.

Area 3 - Information Systems Analysis and Specification

Full Papers
Paper Nr: 15
Title:

We Need to Discuss the Relationship - An Analysis of Facilitators and Barriers of Software Ecosystem Partnerships

Authors:

George Valença and Carina Alves

Abstract: Software ecosystems are a promising paradigm to develop and market software systems by means of partnerships among companies. To ensure the healthy evolution of software ecosystems, companies must define strategies that strengthen their partnerships. In this paper, we investigate the factors that drive the evolution of software ecosystems formed by Small-to-Medium Enterprises. We present an exploratory case study of two emergent software ecosystems in order to analyse the main facilitators and barriers faced by participating companies. We adopt the System Dynamics approach to create models expressing causal relations among these factors. By understanding the facilitators that should be reinforced and the barriers that should be restrained, we believe that partners are better equipped to catalyse the success of their software ecosystems.

Paper Nr: 35
Title:

The Influence of Software Product Quality Attributes on Open Source Projects: A Characterization Study

Authors:

Antonio Cesar Brandao Silva, Kattiana Constantino, Glauco de Figueiredo Carneiro, Antonio Carlos M. de Paula and Eduardo Figueiredo

Abstract: Several Open Source Software (OSS) projects have adopted frequent releases as a strategy to deliver both new features and bug fixes on time. These releases address explicit requests from the project’s community, registered as issues in bug repositories by active users and developers. Each OSS project has its own priorities, established by its community. A still open question is to what extent these priorities influence the selection of the issues that should be tackled first, implemented or solved, and delivered in subsequent releases. In this paper, we present an exploratory study on the influence of target product quality attributes on the software release practices of OSS projects. The goal is to search for evidence clarifying the relationships between target attributes, the priorities assigned to registered issues, and the ways they are delivered in product releases. To this end, we asked a set of participants to identify these attributes through analysis of the repositories of three well-known OSS projects: LibreOffice, Eclipse and Mozilla Firefox. Evidence provided by the participants suggests that OSS community developers use criteria and priorities driven by specific software product quality attributes to plan and perform software releases.

Paper Nr: 69
Title:

Application of Heuristics in Business Process Models to Support Software Requirements Specification

Authors:

Fernando Aparecido Nogueira and Hilda Carvalho de Oliveira

Abstract: Requirements Engineering has suffered from difficulties caused by communication failures between business and Information Technology (IT) teams. Knowledge of the enterprise's business domain is very important for systems analysts when developing software solutions to automate activities and processes. However, these analysts face frequent changes in the system scope and incomplete or erroneous requirements descriptions. This work presents a systematic process that uses business process models to automatically extract the functional and non-functional requirements that compose the Software Requirements Specification document. This automatic process applies requirements heuristics implemented by a freeware software system that generates software requirements documents with use cases and UML diagrams. The systematic process uses XML to facilitate systems integration, as well as the re-use and visualization of the results. Additionally, this work presents business heuristics that enable significant improvements in the documentation of business process models, bringing advantages for both business and IT. Assessment tools for the documentation level are proposed both for business process models and for software requirements documents.

Paper Nr: 112
Title:

RSLingo4Privacy Studio - A Tool to Improve the Specification and Analysis of Privacy Policies

Authors:

André Ribeiro and Alberto Rodrigues da Silva

Abstract: Popular software applications collect and retain a lot of users’ information, part of which is personal and sensitive. To assure that only the desired information is made public, these applications have to define and publish privacy policies that describe how they manage and disclose this information. Problems arise when privacy policies are misinterpreted, for instance because they contain ambiguous and inconsistent statements, which results in a defective application of the policy enforcement mechanisms. The RSLingo4Privacy approach aims to improve the specification and analysis of such policies. This paper presents and discusses its companion tool, the RSLingo4Privacy Studio, which materializes this approach by providing the technological support for users to specify, analyze and publish policies based on the RSL-IL4Privacy domain-specific language. We validated its feasibility using the policies of popular websites such as Dropbox, Facebook, IMDB, LinkedIn, Twitter and Zynga. We conclude this paper with a discussion of related work, namely a comparative analysis of the pros and cons of RSLingo4Privacy Studio with respect to previous proposals.

Paper Nr: 115
Title:

Identifying Possible Requirements using Personas - A Qualitative Study

Authors:

Bruna Ferreira

Abstract: Involving end users in requirements elicitation helps to generate applications with a positive usage experience. However, in the current context of software development, having users involved in requirements elicitation is difficult due to factors such as a lack of paying customers, users’ unavailability for validations, and budget and time restrictions on applying traditional techniques such as interviews and questionnaires. Therefore, alternative techniques to understand and gather users’ needs are indicated. Persona is a technique created to help understand users, their characteristics and what they expect from an application. The technique allows describing users’ profiles and understanding their characteristics, attitudes and behaviors. However, the description of a persona may contain much irrelevant information that does not generate requirements for the development of the application. Therefore, we proposed the PATHY technique to support the creation of personas and to generate descriptions more focused on the identification of requirements for a particular application. This paper presents a study using the PATHY technique (version 2.0). This empirical study aims to evaluate the quality of the possible software requirements the technique helps identify, while gathering the participants’ feedback on using PATHY. The results show that PATHY supports creating persona descriptions that lead to the identification of requirements, helping to design applications according to target users’ characteristics and preferences.

Paper Nr: 123
Title:

How have Software Engineering Researchers been Measuring Software Productivity? - A Systematic Mapping Study

Authors:

Edson Oliveira and Davi Viana

Abstract: Context: productivity has been a recurring topic and, despite its importance, researchers have not yet reached a consensus on how to properly measure productivity in software engineering. Aim: to investigate and better understand how researchers are using software productivity metrics. Method: we performed a systematic mapping study of publications on software productivity, extracting how software engineering researchers measure it. Results: a total of 91 software productivity metrics were extracted. The results show that researchers apply these metrics mainly to software projects and developers, and that the metrics are predominantly composed of Lines of Code (LOC), time and effort measures. Conclusion: although there is no consensus, our results show that single-ratio metrics, such as LOC/Effort for software projects and LOC/Time for software developers, are a tendency adopted by researchers to measure productivity.

Paper Nr: 164
Title:

Enterprise Knowledge Graphs: A Semantic Approach for Knowledge Management in the Next Generation of Enterprise Information Systems

Authors:

Mikhail Galkin and Sören Auer

Abstract: In enterprises, Semantic Web technologies have recently received increasing attention from both the research and industrial sides. The concept of Linked Enterprise Data (LED) describes a framework to incorporate the benefits of Semantic Web technologies into enterprise IT environments. However, LED still remains an abstract idea lacking a point of origin, i.e., the station zero from which it comes into existence. We devise Enterprise Knowledge Graphs (EKGs) as a formal model to represent and manage corporate information at a semantic level. EKGs are presented and formally defined, as well as positioned in Enterprise Information System (EIS) architectures. Furthermore, according to the main features of EKGs, existing EISs are analyzed and compared using a new unified assessment framework. We conduct an evaluation study in which cluster analysis identifies and visualizes groups of EISs that share the same EKG features. More importantly, we put our observed results in perspective and provide evidence that existing approaches do not implement all the EKG features, making the development of these features a challenge for the next generation of EISs.

Paper Nr: 165
Title:

A Task-oriented Requirements Engineering Method for Personal Decision Support Systems - A Case Study

Authors:

Christian Kücherer and Barbara Paech

Abstract: [Context and motivation] Personal Decision Support Systems (PDSSs) are information systems which support executives in decision-making through a decision- and user-specific data presentation. PDSSs operate on current data with predefined queries and provide a rich user interface (UI). Thus, a Requirements Engineering (RE) method for PDSSs should support the elicitation and specification of detailed requirements for specific decisions. However, existing RE approaches for decision support systems typically focus on ad-hoc decisions in the area of data warehouses. [Question/problem] Task-oriented RE (TORE) emphasizes a comprehensive RE specification which covers stakeholders’ tasks, data, system functions, interactions, and UI. TORE allows early UI prototyping, which is crucial for PDSSs. Therefore, we want to explore TORE’s suitability for PDSSs. [Principal ideas/results] According to the Design Science methodology, we assess TORE’s suitability for PDSS specification in a problem investigation. We propose decision-specific adjustments of TORE (DsTORE), which we evaluate in a case study. [Contribution] The contribution of this paper is threefold. First, the suitability of the task-oriented RE method TORE for the specification of a PDSS is investigated as a problem investigation. Second, a decision-specific extension of TORE is proposed as the DsTORE method in order to identify and specify details of decisions to be supported by a PDSS. DsTORE is evaluated in a case study. Third, experiences from the study and method design are presented.

Paper Nr: 195
Title:

Feature Model based on Design Pattern for the Service Provider in the Service Oriented Architecture

Authors:

Akram Kamoun and Mohamed Hadj Kacem

Abstract: In Service Oriented Architecture (SOA), service contracts are widely used for designing and developing the features (e.g., services and capabilities) of Service Providers (SPs). Two of the most widely used traditional service contracts in SOA are WSDL and WADL. We identify that these service contracts suffer from several problems: they only support the SOAP and REST communication technologies, and they do not rely on modeling SOA Design Patterns (DPs). One benefit of using SOA DPs is that they permit developing proven SPs for different platforms. In order to overcome these problems, we introduce a new DP-based Feature Model (FM), named FMSP, as a service contract that models the variability of SP features including 15 SOA DPs (e.g., the Event-driven messaging DP) and their corresponding constraints. This allows valid SOA compound DPs to be easily identified and developed. We demonstrate, through a practical case study and a developed tool, that our FMSP allows the automatic generation of fully functional, valid, highly customized, DP-based SPs. We also show that our FMSP reduces the effort and time required to develop SPs.

Paper Nr: 197
Title:

A TOSCA-based Programming Model for Interacting Components of Automatically Deployed Cloud and IoT Applications

Authors:

Michael Zimmermann

Abstract: Cloud applications typically consist of multiple components interacting with each other. Service-orientation, standards such as WSDL, and the workflow technology provide common means to enable the interaction between these components. Nevertheless, during the automated application deployment, endpoints of interacting components, e.g., URLs of deployed services, still need to be exchanged: the components must be wired. However, this exchange mainly depends on the used (i) middleware technologies, (ii) programming languages, and (iii) deployment technologies, which limits the application’s portability and increases the complexity of implementing components. In this paper, we present a programming model for easing the implementation of interacting components of automatically deployed applications. The presented programming model is based on the TOSCA standard and enables invoking components by their identifiers and interface descriptions contained in the application’s TOSCA model. The approach can be applied to Cloud and IoT applications, i.e., software hosted on physical devices may also use the approach to call other application components. To validate the practical feasibility of the approach, we present a system architecture and prototype based on OpenTOSCA.

Paper Nr: 201
Title:

Software Testing Process in a Test Factory - From Ad hoc Activities to an Organizational Standard

Authors:

Rossana Maria de Castro Andrade, Ismayle de Sousa Santos and Valéria Lelli

Abstract: Software testing is undoubtedly essential for any software development. However, testing is an expensive activity, usually costing more than 50% of the development budget. Thus, to save resources while performing tests with high quality, many software development companies are hiring test factories, which are specialized enterprises that deliver outsourced testing services to other companies. Although this kind of organization is common in the industry, we have found few empirical studies concerning test factories. In this paper, we report our experience in the definition, use, and improvement of a software testing process within a test factory. To support the implementation of the test factory, we applied the PDCA (Plan-Do-Check-Act) cycle, using the lessons learned in the PDCA check phase to improve the testing process. As a result, we have decreased the number of failures found after software delivery and thus achieved a higher value for the Defect Removal Efficiency (DRE) measure. We also present 12 lessons learned that may be applicable to other test factories.

Paper Nr: 226
Title:

Security Requirements for Smart Toys

Authors:

Luciano Gonçalves de Carvalho and Marcelo Medeiros Eler

Abstract: Toys are an essential part of our culture, and they evolve as our technology evolves. Smart toys have recently been introduced to the market as conventional toys equipped with electronic components and sensors that enable wireless network communication with mobile devices, which provide services to enhance the toy's functionalities. This environment, also called toy computing, provides users with a more sophisticated and personalised experience, since it collects, processes and stores personal information to be used by mobile services and the toy itself. On the other hand, it raises concerns around information security and child safety, because unauthorized access to confidential information may have many consequences. In fact, several security flaws in toy computing have recently been reported in the news due to the absence of clear security policies in this new environment. In this context, this paper presents an analysis of the toy computing environment based on the Microsoft Security Development Lifecycle and its threat modelling tool, with the aim of identifying a minimum set of security requirements a smart toy should meet. As a result, we identified 15 threats and 20 security requirements for toy computing.

Paper Nr: 273
Title:

An Ontology-based Approach to Analyzing the Occurrence of Code Smells in Software

Authors:

Luis Paulo da Silva Carvalho and Renato Novais

Abstract: Code Smells indicate potential flaws in software design that can lead to costly consequences. To mitigate the bad effects of Code Smells, it is necessary to detect and fix defective code. Programmatic processing of Code Smells is not new. Previous works have focused on detection and representation to support the analysis of faulty software. However, such works are based on syntactic operations, without taking advantage of the semantic properties of the software. On the other hand, there are several ways to provide semantic support in software development as a whole. Ontologies, for example, have recently been used. The application of ontologies to infer semantic mechanisms that aid software engineers in dealing with smells may be of great value. As little attention has been given to this, we propose an ontology-based approach to analyze the occurrence of Code Smells in software projects. First, we present a comprehensive ontology that is capable of representing Code Smells and their association with software projects. We also introduce a tool that can manipulate our ontology in order to process Code Smells as it mines software source code. Finally, we conducted an initial evaluation of our approach in a real usage scenario with two large open-source software repositories.

Paper Nr: 276
Title:

IFactor-KM: A Process for Supporting Knowledge Management Initiatives in Software Organizations Considering Influencing Factors

Authors:

Jacilane Rabelo and Tayana Conte

Abstract: Knowledge management has become a real need in the software industry. Knowledge management factors refer to management and organizational factors that need to be addressed effectively in order to increase the chances of a successful knowledge management implementation. Many organizations have questions about the approach they should take in their knowledge management initiatives. Literature studies have been conducted to identify the factors that affect the implementation of knowledge management, but they do not suggest knowledge management practices for organizations. A process named IFactor-KM (Influencing Factors on Knowledge Management) was created to address these needs. The goal of this process is to support knowledge management initiatives and to suggest knowledge management practices for software organizations considering the following influencing factors: people, leadership and culture. IFactor-KM supports software organizations by: a) identifying the knowledge management objectives; b) checking how tacit knowledge is shared; c) showing the knowledge experts; d) understanding leadership and people aspects; e) characterizing the organizational culture profile; and f) suggesting knowledge management practices. The process is composed of: i) a procedure detailing the steps of the process; and ii) a set of artifacts detailing how to use the process, along with examples of completed artifacts to facilitate its use.

Paper Nr: 284
Title:

Application of Memetic Algorithms in the Search-based Product Line Architecture Design: An Exploratory Study

Authors:

João Choma Neto

Abstract: Basic design principles, feature modularization, and SPL extensibility of Product Line Architecture (PLA) design have been optimized by multi-objective genetic algorithms. Until now, memetic algorithms have not been used for PLA design optimization. Considering that memetic algorithms (MA) have achieved better-quality solutions than genetic algorithms (GA), and that a previous study applying design patterns to PLA design optimization returned promising results, we were motivated to investigate the use of MA with the Design Pattern Search Operator as local search in this context. This work presents an exploratory study aimed at characterizing the application of MA in PLA design optimization. When compared with a GA approach, the results show that MA are promising, since the obtained solutions are slightly better than the solutions found by the GA. A pattern application rate was identified in about 30% of the solutions obtained by MA. However, the qualitative analysis showed that the existing global search operators need to be refactored for joint use with the MA approach.

Paper Nr: 297
Title:

An Integrated Inspection System for Belt Conveyor Rollers - Advancing in an Enterprise Architecture

Authors:

Richardson Nascimento, Regivaldo Carvalho, Saul Delabrida and Andrea G. C. Bianchi

Abstract: One of the most critical pieces of equipment used by mining companies is the belt conveyor. Thousands of kilometers of these elements are used for bulk material transportation. A belt conveyor system is composed of several components, and the maintenance process is not trivial and usually reactive. Thousands of dollars are lost per hour when a belt conveyor system fails. This occurs due to the lack of appropriate mechanisms for efficiently monitoring this process and integrating it with the enterprise systems. This paper presents a novel monitoring and integration architecture for a Brazilian mining company. The challenge is to provide a mobile control system and its integration with the current enterprise solutions. We also describe a set of restrictions for a particular component (the rollers) in order to identify methods for the integration. Preliminary results demonstrate that our solution is a feasible alternative for the case study.

Short Papers
Paper Nr: 39
Title:

Building the Monitoring Systems for Complex Distributed Systems: Problems and Solutions

Authors:

Olga Korableva

Abstract: Complex distributed systems have become increasingly significant, due to their broader range of use and the better services they provide to users. It is clear that system health needs continuous monitoring while running the software applications that implement business processes, work with Big Data, etc. In the course of this study a monitoring system has been developed that meets modern requirements such as scalability, flexibility, comprehensiveness of necessary data, and ease of use. In order to identify common problems in the development of monitoring systems for complex distributed systems, and the corresponding solutions for their elimination, data on the IT architecture types most commonly used in modern companies, related to the fault points of business applications, was gathered and analysed, taking the architecture of these systems into account. All identified problems and the optimal solutions to eliminate them were aggregated along the three development stages of a monitoring system: developing a servicing model for the system of interest, implementing tools to detect the objects of monitoring, and generating a health map of the system of interest.

Paper Nr: 40
Title:

DataFlow Analysis in BPMN Models

Authors:

Anass Rachdi

Abstract: Business Process Model and Notation (BPMN) is the de facto standard used in enterprises for modeling business processes. However, this standard was not provided with a formal semantics, which limits the possibility of analysis to informal approaches such as observation. While most of the existing formal approaches to BPMN model verification focus on the control flow, only a few have treated the data-flow angle. The latter is important, since the correct execution of activities in BPMN models depends on the availability and correctness of data. In this paper, we present a new approach that uses the DataRecord concept, adapted for the BPMN standard. The main advantage of our approach is that it locates the stage at which a data-flow anomaly has taken place, as well as the source of the data-flow problem, so the designer can easily correct the anomaly. The model's data-flow problems are detected using an algorithm specific to the BPMN standard.

Paper Nr: 41
Title:

Towards a New Conceptualization of Information System Benefits Assessment

Authors:

Sylvain Goyette and Luc Cassivi

Abstract: Different perspectives on benefit evaluation are presented in the information technology literature, from the perceptual assessment of benefits to the financial calculation of return on investment. This study aims to complement the literature by integrating the IT capital expense literature and DeLone and McLean’s (2003) information systems success model. A model was developed using a qualitative approach with respondents from three manufacturing organizations who were responsible for the information system evaluation process. The five-stage model is composed of project identification, proposal development, proposal selection, IS creation/use and organizational benefit evaluation. This conceptualization adds a new and enriched perspective to the literature by integrating financial and perceptual benefit assessment with an organizational assessment process. The analysis of the data collected confirmed the inefficiency of user perceptions for organizational success assessment, but also revealed top management perceptions to be a critical factor in the evaluation process.

Paper Nr: 60
Title:

Model-driven Development of User Interfaces for IoT Systems via Domain-specific Components and Patterns

Authors:

Marco Brambilla and Eric Umuhoza

Abstract: Internet of Things technologies and applications are evolving and continuously gaining traction in all fields and environments, including homes, cities, services, industry and commercial enterprises. However, many problems still need to be addressed. For instance, the IoT vision is mainly focused on the technological and infrastructure aspects, and on the management and analysis of the huge amount of generated data, while so far the development of front-ends and user interfaces for IoT has not played a relevant role in research. On the contrary, user interfaces in the IoT ecosystem can play a key role in the acceptance of solutions by final adopters. In this paper we present a model-driven approach to the design of IoT interfaces, by defining a specific visual design language and design patterns for IoT applications, and we show them at work. The language we propose is defined as an extension of the OMG standard language called IFML.

Paper Nr: 87
Title:

Using Linear Logic to Verify Requirement Scenarios in SOA Models based on Interorganizational WorkFlow Nets Relaxed Sound

Authors:

Kênia Santos de Oliveira

Abstract: This paper presents a method for requirement verification in Service-Oriented Architecture (SOA) models based on Interorganizational WorkFlow nets that are not necessarily deadlock-free. In this method, a requirement model corresponds to a public model that specifies only the tasks of interest to all parties involved. An architectural model is considered as a set of private processes that interact through asynchronous communication mechanisms in order to produce the services specified in the corresponding requirement model. Services can be seen as scenarios of WorkFlow nets. For each scenario that exists in the requirement model, a proof tree of Linear Logic can be produced, and for each correctly finalized scenario, a precedence graph that specifies the task sequence can be derived. For each scenario of the architectural model, similar precedence graphs can be produced. The precedence graphs of the requirement and architectural models are then compared in order to verify whether all existing scenarios of the requirement model also exist at the architectural model level. The comparison of behavior between distinct discrete event models is based on the notion of branching bisimilarity, which proves behavioral equivalence between distinct finite automata. The example used to illustrate the proposed approach shows that the method can effectively identify whether a SOA-based system satisfies the business needs specified by a model of public requirements.

Paper Nr: 88
Title:

Deriving Domain Functional Requirements from Conceptual Model Represented in OntoUML

Authors:

Joselaine Valaski

Abstract: A conceptual model is an artifact that helps to understand a domain and, therefore, may contribute to the elicitation of related functional requirements. However, the expressiveness of this model depends on the expressiveness of the language used. Considering that OntoUML is a language that provides elements with richer semantics, it is possible to build more expressive models which are more complete than, for instance, models represented in UML. To evaluate the possibility of deriving domain functional requirements (DFR) from models represented in OntoUML, a heuristic was proposed. This heuristic was obtained by reading and interpreting nine conceptual models represented in OntoUML. Once obtained, the heuristic was applied in a systematized manner to six models. According to the results obtained, using a conceptual model represented in OntoUML as a source to derive DFR is possible. In addition to identifying the DFR, the heuristic can identify possible faults in the model design, or even the incompleteness of the model.

Paper Nr: 96
Title:

When Agile Meets Waterfall - Investigating Risks and Problems on the Interface between Agile and Traditional Software Development in a Hybrid Development Organization

Authors:

Rob J. Kusters and Youri van de Leur

Abstract: This paper aims to map issues (risks and problems) at the interface of agile and traditional development approaches in hybrid organizations which have an impact on coordination and cooperation. Successfully combining agile and traditional development methods appears to be quite a challenge for many hybrid organizations. Both methods have their own strengths and added value but also bring their own culture and conditions. Combining these can lead to problems. If we want to handle such problems, we first need to understand the issues that can cause such problems. This study is aimed at identifying and validating an overview of these issues. Based on an exploration of literature a preliminary overview of issues was derived. These were classified into a coherent set. The result was validated in a case study within a large financial institute in the Netherlands. The resulting list of twenty-four issues can be used as a starting point for handling the problem area.

Paper Nr: 100
Title:

Decision Criteria for Software Component Sourcing - An Initial Framework on the Basis of Case Study Results

Authors:

Jos. J. M. Trienekens and Rob Kusters

Abstract: Software developing organizations nowadays have a wide choice when it comes to sourcing software components. This choice ranges from developing or adapting in-house developed components, via buying closed source components, to utilizing open source components. This study aims at structured support for this type of decision. As a basis for this study, an initial set of criteria is taken that was identified and validated in a particular software development environment in a previous study on the subject (Kusters et al., 2016). In the paper at hand we report on the results of applying and validating the initial set of sourcing criteria in a completely different case study environment, namely one in which medical embedded software is developed. In addition, based on the outcomes of our case study, a further step is made towards structured decision support for sourcing decisions through the development of an initial sourcing criteria framework.

Paper Nr: 105
Title:

Development of an Electronic Health Record Application using a Multiple View Service Oriented Architecture

Authors:

Joyce M. S. França

Abstract: Service-Oriented Architecture has been widely adopted in several domains in recent years with the purpose of developing distributed applications. Within the health domain, integration of legacy systems by means of web services has been applied in order to develop complex applications. However, few approaches in this area treat complexity by delineating a software architecture to which applications must conform. In most cases in the literature, SOA applications in the health domain are documented using only one or two architecture views. This paper proposes a multiple view Service Oriented Architecture which is the basis for the development of an Electronic Health Record (EHR) application. In order to develop the EHR, new requirements as well as current functionalities obtained from integrating legacy systems by means of web services were considered in a cohesive approach. Four architecture views are presented: Scenarios, Business Process, Implementation, and Logical. Each architecture view addresses one specific concern that organizes important concepts, facilitates understanding the system, and improves possibilities of communication between stakeholders. As a result, important software principles such as separation of concerns, component-based development and modularity are considered for the development and integration of legacy systems in order to build the EHR application to be deployed in a public hospital.

Paper Nr: 108
Title:

A Linear Logic based Synchronization Rule for Deadlock Prevention in Web Service Composition

Authors:

Vinícius Ferreira de Oliveira and Stéphane Julia

Abstract: This paper presents a prevention method for deadlock situations in Web Services composition. This method considers the Petri net theory and is based on the analysis of Linear Logic proof trees. Initially, it is necessary to detect deadlock scenarios by analyzing the Linear Logic proof trees built for each different scenario of the modules from which the composed system is built. Following on from this, a synchronization rule is proposed in order to prevent deadlock situations in these deadlock scenarios. The basic principle of such a rule is to force workflow modules to execute specific tasks respecting a local scheduling policy in order to remove the situations responsible for the deadlocks. This paper therefore presents a synchronization strategy to prevent deadlock situations in Web Services composition that are deadlock-free within the local workflow modules but not necessarily deadlock-free when considering the entire composed system.

Paper Nr: 113
Title:

An Empirical Evaluation of Requirements Elicitation from Business Models through REMO Technique

Authors:

Jônatas Medeiros de Mendonça, Pedro Garcêz de Moura, Weudes Evangelista, Hugo Martins, Rafael Reis, Edna Dias Canedo and Rodrigo Bonifácio

Abstract: The Requirements Elicitation oriented by business process MOdeling (REMO) technique presents a set of heuristics to support the elicitation of requirements based on business process models. Although the technique has been empirically validated in controlled environments, the literature does not report evidence regarding its applicability in real scenarios. In this context, this paper presents an empirical evaluation in an industrial setting, using a multimethod approach in which a quantitative analysis measured the applicability of the technique and a qualitative analysis assessed its utility and ease of use according to requirements analysts. As for the results, the quantitative analysis made it clear that the REMO technique can bring real benefit in the context of the study, identifying a higher number of functional requirements than the conventional approach (without the support of the REMO technique). This benefit is achieved without overcomplicating the task of eliciting requirements.

Paper Nr: 130
Title:

A Software Process Line for Combinational Creativity-based Requirements Elicitation

Authors:

Rafael Pinto and Lyrene Silva

Abstract: The need for innovation and the appreciation of creative solutions have driven requirements engineering researchers to investigate creativity techniques to elicit useful and unique requirements. Some techniques are based on the combination of ideas (requirements, words or problems) that generally come from different sources and are carried out in a process that involves different roles. However, how can we identify the common core, and which variations can be adapted to the organizational context where the technique will be used? This article presents a Software Process Line (SPrL) to elicit requirements based on combinational creativity. This SPrL represents the commonalities and variabilities found in several combinational creativity techniques, thereby helping teams define the combinational technique according to their organizational context. We validate this approach by discussing how the SPrL aligns with three techniques that have already been used in experimental studies and produced satisfactory results.

Paper Nr: 146
Title:

An Acceptance Empirical Assessment of Open Source Test Tools

Authors:

Natasha M. Costa Valentim, Adriana Lopes, Edson César and Tayana Conte

Abstract: Software testing is one of the verification and validation activities of the software development process. Test automation is relevant, since manual application of tests is laborious and more error-prone. The choice of test tools should be based on criteria and evidence of their usefulness and ease of use. This paper presents an acceptance empirical assessment of open source testing tools. Practitioners and graduate students evaluated five tools often used in the industry. The results describe how these tools are perceived in terms of ease of use and usefulness. These results can support software practitioners in the process of choosing testing tools for their projects.

Paper Nr: 200
Title:

A Literature Review of Benefit Analysis Approaches for IT Projects in the Public Sector

Authors:

Oscar Avila

Abstract: The financial investments of public institutions in Information Technology (IT) projects have considerably increased in recent years. This has resulted in an enormous pressure on the IT managers of these institutions to find ways to analyse the benefits of these investments. However, this task is very difficult because, in the public sector, IT projects do not only generate financial benefits but also create public value through other types of benefits, such as social and political ones. In addition, in this sector there is a variety of beneficiaries and ways of measuring the benefits. In this context, this work presents a survey of the research literature aimed at identifying the main types of benefits and beneficiaries, as well as analysis methods. From the literature review, the paper presents a conceptual model that aims at establishing the basis for a complete approach to benefit analysis in the public sector.

Paper Nr: 207
Title:

Modelling Enterprise Applications using Business Artifacts

Authors:

Vladimír Kovář

Abstract: Most of the existing modeling languages such as UML, BPML, etc. that attempt to capture the semantics of real-world objects produce complex technical models that are not suitable for business professionals. Another important limitation of traditional modeling approaches is the lack of a mechanism for modeling the lifecycle of business objects. These limitations have motivated recent interest in alternative approaches such as business artifact modeling, which provide a unified representation of data and processes in the form of business artifacts. In this paper we describe the Unicorn Universe Process method, which uses business artifacts as a fundamental building block of information systems. We illustrate the application of this method using a University Assignment Submission case study scenario.

Paper Nr: 213
Title:

Knowledge Transfer in IT Service Provider Transition

Authors:

Emilie de Morais, Geovanni de Jesus and Rejane Figueiredo

Abstract: Although outsourcing Information Technology (IT) services brings benefits, it might cause loss of knowledge and dependency of the contractor on the service providers. When a transition occurs between providers, knowledge sharing is essential for both the contractor and the newly contracted provider. This paper presents the proposal, execution and evaluation of a knowledge transfer process for the transition phase of a contract. A case study was conducted in a Brazilian Federal Government Organization during a contractual transition. Data collected from two projects allowed the evaluation of the proposed process. It was observed that the training activities performed, and the granting of access to process information and to the services to be transferred, were essential to an effective knowledge transfer. Although this work is a single case study, it was observed that the process could be generalized to service provider transitions involving organizations contracting IT services.

Paper Nr: 214
Title:

Semantic Mutation Test to OWL Ontologies

Authors:

Alex Mateus Porn and Leticia Mara Peres

Abstract: Ontologies are structures used to represent a specific knowledge domain. There is no single right way of defining an ontology, because its definition depends on its purpose, domain, abstraction level and a number of ontology engineering choices. Therefore, a domain can be represented by distinct ontologies with distinct structures and, consequently, these can yield distinct results when classifying and querying information. In light of this, faults can be accidentally introduced during ontology development, causing unexpected results. In this context, we propose semantic mutation operators and apply a semantic mutation test method to OWL ontologies. Our objective is to reveal semantic faults caused by poor axiom definitions by automatically generating test data. Our method revealed semantic errors occurring in OWL ontology constraints. Eight semantic mutation operators were used, and we observe that it is necessary to define new semantic mutation operators to address all OWL language features.

Paper Nr: 225
Title:

Combining Behaviour-Driven Development with Scrum for Software Development in the Education Domain

Authors:

Pedro Lopes de Souza, Antonio Francisco do Prado and Wanderley Lopes de Souza

Abstract: Most Brazilian universities employ teaching-learning methodologies based on classic frontal lectures. The Medicine Programme of the Federal University of São Carlos (UFSCar) is an exception, since it employs active learning methodologies. The Educational and Academic Management System for Courses Based on Active Learning Methodologies (EAMS-CBALM) was built to support this programme, is currently in use, and has been made available to other programmes as well. This system was developed using Scrum, but during its development it was often necessary to reconsider system behaviour scenarios, and consequently the product backlog items, mainly due to poor communication between the Product Owner (PO) and the development team. This paper discusses a case study in which Behaviour-Driven Development (BDD) has been used in combination with Scrum to redesign some EAMS-CBALM components. The paper demonstrates that communication between the PO and the development team can be improved by using BDD as a communication platform to unambiguously define system requirements and automatically generate test suites.

Paper Nr: 237
Title:

Evaluating the Accuracy of Machine Learning Algorithms on Detecting Code Smells for Different Developers

Authors:

Mário Hozano and Nuno Antunes

Abstract: Code smells indicate poor implementation choices that may hinder system maintenance. Their detection is important for software quality improvement, but studies suggest that it should be tailored to the perception of each developer. Therefore, detection techniques must adapt their strategies to the developer's perception. Machine Learning (ML) algorithms are a promising way to customize smell detection, but there is a lack of studies on their accuracy in detecting smells for different developers. This paper evaluates the use of ML algorithms in detecting code smells for different developers, considering their individual perceptions of code smells. We experimentally compared the accuracy of 6 algorithms in detecting 4 code smell types for 40 different developers. For this, we used a detailed dataset containing instances of 4 code smell types manually validated by 40 developers. The results show that the ML algorithms achieved low accuracy for the developers who participated in our study, showing that they are very sensitive to the smell type and the developer. These algorithms are not able to learn from a limited training set, an important limitation when dealing with diverse perceptions of code smells.

Paper Nr: 266
Title:

Architecture for Privacy in Cloud of Things

Authors:

Luis Pacheco

Abstract: A large number of devices are connected to the internet through the Internet of Things (IoT) paradigm, resulting in a huge amount of produced data. Cloud computing is currently adopted to process, store and provide access control to these data. This integration, called Cloud of Things (CoT), is useful in personal networks, such as residential automation and health care, since it facilitates access to the information. Although this integration brings benefits to users, it introduces many security challenges, since the information leaves the user's control and is stored at the cloud providers. In particular, for these technologies to be adopted, it is important to provide protocols and mechanisms to preserve users' privacy when storing their data in the cloud. In this context, this paper proposes an architecture for privacy in the Cloud of Things, which allows users to fully control access to the data generated by the devices of their IoT networks and stored in the cloud. The proposed architecture enables fine-grained control over data, since the privacy protocols and controls are executed at the IoT devices instead of at the network border by a gateway, which could otherwise represent a single point of failure or a component that, once compromised by a successful attack, could impair the security properties of the system.

Paper Nr: 269
Title:

Improving Confidence in Experimental Systems through Automated Construction of Argumentation Diagrams

Authors:

Clément Duffau

Abstract: Experimental and critical systems are two universes that are increasingly entangled in domains such as bio-technologies and aeronautics. Verification, Validation and Accreditation (VV&A) processes are an everyday issue in these domains, and the large scale of experiments needed to work out a system leads to overhead issues. All internal V&V has to be documented and traced to ensure that confidence in the produced system is good enough for accreditation bodies. This paper proposes a practical approach to automate the construction of argumentation systems based on empirical studies, in order to represent the reasoning and improve confidence in such systems. We illustrate our approach with two use cases, one in the biomedical field and the other in the machine learning workflow domain.

Paper Nr: 274
Title:

Quality Attributes Analysis in a Crowdsourcing-based Emergency Management System

Authors:

Ana Maria Amorim, Glaucya Boechat and Renato Novais

Abstract: In an emergency situation where the physical integrity of people is at risk, a mobile solution should be easy to use and trustworthy. In order to offer a good user experience and to improve the quality of the app, we should evaluate characteristics of usability, satisfaction, and freedom from risk. This paper presents an experiment whose objective is to evaluate quality attributes in a crowdsourcing-based emergency management system. The quality attributes evaluated are: appropriateness recognisability, user interface aesthetics, usefulness, trust, and health and safety risk mitigation. The experiment was designed following the Goal/Question/Metric approach, and we were able to evaluate the app with experts from the emergency domain. The results showed that the participants found the app well designed, easy to understand, easy to learn, and easy to use. This evaluation supported the improvement both of the application and of the evaluation process adopted.

Paper Nr: 279
Title:

Improving the Assessment of Advanced Planning Systems by Including Optimization Experts' Knowledge

Authors:

Melina Vidoni

Abstract: Advanced Planning Systems (APS) are core for many production companies that require the optimization of their operations using applications and tools for planning, scheduling, and logistics, among others. Because of this, process optimization experts are required to develop those models and are, therefore, stakeholders in this system's domain. Since the core of an APS consists of models to improve company performance, the knowledge of this group of stakeholders can enhance the evaluation of the APS architecture. However, the methods available for this task require participants with extensive Software Engineering (SE) understanding. This article proposes a modification of ATAM (Architecture Trade-off Analysis Method) to include process optimization experts in the evaluation. The purpose is to create an evaluation methodology centred on what these stakeholders value most in an APS, to capitalize on their expertise in the area, and to obtain valuable information and assessment regarding the APS and the interoperability of models and solvers.

Paper Nr: 288
Title:

Organizational Training within Digital Transformation: The ToOW Model

Authors:

Maria João Ferreira

Abstract: Information systems and technologies (IST) are the essence of up-to-date organizations, and changes in this field are occurring at an uncontrollable pace, disrupting traditional business models and forcing organisations to implement new ones. Social media tools represent a subset of these technologies that contribute to the digital transformation (DT) of organizations and are already used in organizational training contexts. However, the adoption of social media tools, by itself, does not guarantee such a transformation; changes in the organization's culture and behaviour are also needed. Taking advantage of the DT technology enablers and recognising the need for updated approaches to organizational workers' training, we propose a model to guide organizational training within DT. The model, called Training of Organizational Workers (ToOW), addresses the 2nd layer of the mobile Create, Share, Document and Training (m_CSDT) framework described previously. The advantages foreseen for the model are two-fold. On the one hand, the model acknowledges the crucial role an organization plays in promoting a culture of continuous learning/training among its employees; on the other hand, it provides guidance on setting up training strategies and activities, as well as on monitoring the training results achieved, which are measured according to the performance metrics considered within the organizational strategy.

Paper Nr: 318
Title:

Semantic Enrichment and Verification of Feature Models in DSPL

Authors:

Thalisson Oliveira

Abstract: Dynamic Software Product Lines (DSPLs) support the development of context-aware systems, which use context information to perform adapted services aiming to satisfy users' needs. Feature models (FM) represent system similarities and variability in a DSPL. However, some FM representations are limited in expressiveness. For example, relevant domain aspects (e.g., a context-aware feature that implements a particular use case) are not described in FMs. This research proposes an approach based on an OWL-DL ontology to add semantics to FMs. It also provides automatic verification of the correctness and consistency of these models. We implemented this approach in a feature model design tool called FixOnto. Our first evaluation results showed that the use of ontologies brings benefits such as improved SPL information retrieval, and inference and traceability of features, use contexts, and SPL artifacts.

Paper Nr: 320
Title:

Crowdsourced Software Development in Civic Apps - Motivations of Civic Hackathons Participants

Authors:

Kiev Gama

Abstract: Hackathons are intensive events that typically last from 1 to 3 days, in which programmers and sometimes people with interdisciplinary backgrounds (e.g., designers, journalists, activists) collaborate to develop software applications to overcome a challenge proposed by the event organizers. Civic hackathons are a particular type of hackathon that gained momentum in recent years, mainly propelled by city halls and government agencies throughout the world as a way to explore public data repositories. These initiatives became an attempt to crowdsource the development of software applications targeting civic issues. Some articles in the academic literature present conflicting arguments about the factors that motivate developers to create such apps. Claims are mostly based on anecdotal evidence, since empirical research on civic hackathons is still scarce. Thus, we conducted a study to gather data from the perspective of hackathon participants, focusing on which motivation factors make them join such competitions. We conducted a survey intended to provide empirical evidence for a diverse audience (e.g., hackathon organizers, open data specialists) interested in civic hackathons as a form of software crowdsourcing. In this work, we present preliminary results.

Posters
Paper Nr: 8
Title:

Instrumenting a Context-free Language Recognizer

Authors:

Paulo Roberto Massa Cereda and João José Neto

Abstract: Instrumentation plays a crucial role when building language recognizers, as the collected data provide a basis for achieving better performance and model improvements, thus offering a balance between time and space, as demanded by practical applications. This paper presents a simple yet functional semiautomatic approach for generating an instrumentation-aware context-free language recognizer, enhanced with hooks, from a grammar written in the Wirth syntax notation. The entire process is aided by a set of command line tools, freely available for download. We also introduce the concept of an instrumentation layer enclosing the underlying recognizer, acting as an observer of each computational step and collecting data for later use.

Paper Nr: 10
Title:

A Systematic Review of Anonymous Communication Systems

Authors:

Ramzi A. Haraty

Abstract: Privacy and anonymity are important concepts in the field of communication. Internet users seek to adopt protective measures to ensure the privacy and security of the data transmitted over the network. Encryption is one technique to secure critical information and protect its confidentiality. Although there exist many encryption algorithms, hiding the identity of the sender can only be achieved through an anonymous network. Different classifications of anonymous networks exist. Latency level and system model architecture are two essential criteria. In this paper, we present a description of a set of anonymous systems including NetCamo, TOR, I2P and many others. We will show how these systems work and contrast the advantages and disadvantages of each one of them.

Paper Nr: 101
Title:

A Technique to Architect Real-time Embedded Systems with SysML and UML through Multiple Views

Authors:

Quelita A. D. S. Ribeiro

Abstract: Describing the architecture of real-time systems by means of semi-formal languages has often been considered in the literature. However, the most common approach is to propose multiple modeling languages in an orthogonal manner, i.e., the models are used in separate phases, in a totally independent way. This is not always possible, and this paper therefore proposes a technique in which diagrams from two modeling languages are integrated. In this paper, UML and SysML are used together. Thus, the proposed technique is capable of modeling both software and system architectural elements, satisfying the following modeling criteria: support for modeling components and connectors, both graphical and textual syntax, modeling of non-functional requirements, design of the structural view of software using UML classes, representation of hardware elements in the architecture, and description of traceability between requirements. A case study on a real-time automotive embedded system is presented to illustrate the technique.

Paper Nr: 104
Title:

Identifying Relevant Resources and Relevant Capabilities of Informal Processes

Authors:

C. Timurhan Sungur, Uwe Breitenbücher and Oliver Kopp

Abstract: Achieving the goals of organizations requires executing certain business processes. Moreover, the effectiveness and efficiency of organizations are affected by how their business processes are enacted. Thus, increasing the performance of business processes is in the interest of every organization. Interestingly, resources and capabilities that impacted past enactments of business processes positively or negatively can have a similar positive or negative impact on their future executions. Therefore, in our former work, we demonstrated a systematic method for identifying such resources and capabilities of business processes using interactions between resources, without detailing the concepts required for this identification. In this work, we fill this gap by presenting a conceptual framework comprising the concepts required for identifying resources that possibly impact business processes, and the capabilities of these resources, based on their interactions. Furthermore, we present means of quantifying the significance of resources and their capabilities for business processes. To evaluate the identified resources and capabilities with their significance, we compare the results of the case study on the Apache jclouds project from our former work with data collected through a survey. The results show that our system can estimate the actual values with a mean absolute percentage error of 18%. Last but not least, we describe how the presented conceptual framework is implemented and used in organizations.

Paper Nr: 114
Title:

Specifying the Technology Viewpoint for a Corporate Spatial Data Infrastructure using ICA's Formal Model

Authors:

Rubens Moraes Torres, Italo Lopes Oliveira and Jugurta Lisboa-Filho

Abstract: In the quest to create a formal model for the development of Spatial Data Infrastructures (SDI), the International Cartographic Association (ICA) has proposed a model based on the RM-ODP framework to describe SDIs regardless of implementation and technology. The RM-ODP framework comprises five viewpoints. The ICA has proposed specifications for the Enterprise, Computation, and Information viewpoints, while the Engineering and Technology viewpoints are yet to be specified. The Companhia Energética de Minas Gerais (Minas Gerais Power Company - Cemig) develops an SDI, called SDI-Cemig, aiming to facilitate the discovery, sharing, and use of geospatial data among its employees, partner companies, and consumers. This study presents the specification of the technologies that comprise the components of SDI-Cemig, using the Technology viewpoint integrated with ICA's formal model.

Paper Nr: 119
Title:

An Empirical Analysis of the Correlation between CK Metrics, Test Coverage and Mutation Score

Authors:

Robinson Crusoé da Cruz and Marcelo Medeiros Eler

Abstract: In this paper we investigate the correlation between test coverage, mutation score and object-oriented system metrics. First, we conducted a literature review to obtain an initial model of testability and the existing object-oriented metrics related to it. We then selected four open source systems whose test cases were available and calculated the correlation between the metrics collected and line coverage, branch coverage and mutation score. Preliminary results show that some CK metrics, which are strongly related to a system's design, influence mainly line coverage and mutation score, and thus can influence a system's testability.

Paper Nr: 124
Title:

Public ICT Governance: A Quasi-systematic Review

Authors:

Marianne Batista Diniz Da Silva, Eduardo Coelho Silva, Fernando Alves De Carvalho Filho and Thauane Moura Garcia

Abstract: This work performs a quasi-systematic review in a structured way to identify, characterize and summarize the main evidence on Public ICT Governance, analysing the methods, techniques, models, frameworks, guides and/or good practices of Public ICT Governance and describing their application in an IT environment, helping ICT managers through a secondary study. A research question was raised and an initial set of 4870 works was considered. Among these, 21 were selected for the construction of this study, through a characterization based on three components (Leadership, Strategy and Control) which encompass the dimensions that define the maturity level of a public organization (iGovTI). In this analysis and characterization, it was identified that the methods, techniques, models, frameworks, guides and/or good practices found were those most mentioned in academia, such as COBIT, ITIL and CMMI, along with other less well-known ones.

Paper Nr: 143
Title:

Project Scope Management: A Strategy Oriented to the Requirements Engineering

Authors:

Igor Luiz Lampa, Allan de Godoi Contessoto, Anderson Rici Amorim and Geraldo Francisco Donegá Zafalon

Abstract: Scope management is an area of project management defined by the PMBoK, comprising processes to register and control everything that belongs to the project boundaries. Despite the relevance of this area to the success of projects, its application is still a challenge, compounded by the lack of computational tools for project management support that integrate scope management in its totality. In addition, the lack of understanding of project requirements is another factor that hinders the execution of this area, because the stakeholders often do not have full knowledge of their needs at the beginning of the project, resulting in changes throughout the project lifecycle, which increase costs and extend deadlines. In this sense, the objective of this work is to propose the integration of scope management with requirements engineering, in order to better identify the requirements of a project and to understand what needs to be done, contributing to the success of projects. To evaluate the results obtained, a requirements engineering module was developed and integrated with a previously developed computational tool for project management, which aims to assist the application of project management following the guidelines and good practices proposed in the PMBoK.

Paper Nr: 160
Title:

TAXOPETIC Process Design - A Taxonomy to Support the PETIC Methodology (Strategic Planning of ICT)

Authors:

Adriana de Melo Fontes, Denisson Santana dos Santos and Thauane Moura Garcia

Abstract: Innovations in organizations require better solutions for technology improvement, quality assurance and customers' business satisfaction. At the same time, Strategic Planning (SP) and Information and Communication Technologies (ICT) need to be integrated and coherent to ensure the survival of organizations. In this context, the Strategic Planning of ICT Methodology (PETIC) is an SP methodology that helps managers to identify the maturity of the ICT processes required for company management. The increasing number of applications of the PETIC methodology in organizations has made it difficult to locate and classify the PETIC artifacts produced. Meanwhile, taxonomies have been successfully applied to classification and information retrieval. This paper proposes TAXOPETIC, a taxonomy to support the PETIC Methodology. It will also be used to implement a software tool called TAXOPETICWeb that will allow the storage and classification of PETIC artifacts, as well as facilitate the process of searching for these artifacts.

Paper Nr: 163
Title:

Procedural x OO - A Corporative Experiment on Source Code Clone Mining

Authors:

José Jorge Barreto Torres

Abstract: Open Source Software (OSS) repositories are widely used in studies of code clone detection, mostly in the public scenario. Corporate code repositories, however, have their content restricted and protected from access by developers who are not part of the company. Besides, there are many open questions regarding paradigm efficiency and its relation to clone manifestation. This article presents an experiment performed on systems developed in a large private education company, to observe and compare the incidence of cloned code between Object-Oriented and Procedural proprietary software, using an exact similarity threshold. The results indicate that, surprisingly, the Object-Oriented software showed a higher incidence of cloned lines of code and a similar use of abstraction (clone sets) for functions or methods.

Paper Nr: 173
Title:

SPMDSL Language Model - Onto a DSL for Agile Use case driven Software Projects’ Management

Authors:

Gilberto G. Gomes Ribeiro

Abstract: Project management involves applying knowledge, skills, tools and techniques to project activities to meet the project requirements. Each project's unique nature implies tailoring that knowledge, skills, tools and techniques to adapt the management activities to cope with project constraints. Management and technical activities meet at some points, namely in activities that have both technical and management relevance. This paper proposes SPMDSL and presents its language model and the domain analysis carried out during its development. SPMDSL aims to be a DSL defining a set of representational primitives with which to model projects in the domain of agile software project management. These primitives are represented as classes and their interrelationships. The proposed DSL focuses on agile, use-case-driven software development project management, and so it also integrates concepts from software modeling. The goal is to enable the representation of past projects' information, facilitating its retrieval for lessons-learned analysis.

Paper Nr: 224
Title:

Location Aware Information System for Non-intrusive Control of Remote Workforce with the Support of Business IT Consumerization

Authors:

Sergio Ríos-Aguilar and Francisco Javier LLoréns-Montes

Abstract: This work proposes a Mobile Information System that HR departments can use to conduct location-based control of the remote workforce, by means of a non-intrusive use of employees' own smartphones, benefiting from the IT Consumerization phenomenon. The proposal provides quantitative and qualitative references that should be met with respect to the location information accuracy needed in common control scenarios for the remote workforce. A fully working prototype of the proposed Mobile Information System was developed to evaluate the validity of the strict accuracy and precision requirements proposed for location data, using a standard check-in process at remote workplaces under real-world conditions. The results obtained in this study confirm that it is currently viable for companies to implement an Information System for the control of the remote workforce that allows them to gain competitiveness, adopting a BYOD paradigm which allows their employees to use their own smartphone mobile devices in the workplace.

Paper Nr: 228
Title:

DOTSIM - A Simulation-based Optimization Methodology for the Optimal Duplication Sequence on Freight Transportation Systems

Authors:

Heygon Araujo and Samyr Vale

Abstract: Determining the best sequence for route duplication in freight systems is a complex NP-hard problem. A huge variety of meta-heuristics (MH) are capable of generating satisfactory solutions; however, it is tedious to determine which MH will produce the best solution for a Duplication Sequence Problem (DSP). This paper proposes a process development methodology that guides the evaluation of the best duplication sequence, comparing the MHs' performance with existing approaches such as the linear analytical method (LAM). The potential of this methodology is demonstrated by a case study in railways.

Paper Nr: 234
Title:

A Specification and Execution Approach of Flexible Cloud Service Workflow based on a Meta Model Transformation

Authors:

Imen Ben Fraj

Abstract: Cloud environments are being increasingly used for deploying and executing workflows composed from cloud services. In this paper, we propose a meta-model transformation for the specification and execution of flexible cloud service workflows. To build abstract workflow models, the proposed approach uses a BPMN model for the specification of the cloud service workflow structure, and a state-chart diagram for the specification of the cloud service workflow behavior. In addition, the workflow models are translated into the BPEL4WS language and executed by the BPEL4WS engine, which is driven by the behavior described by the state-chart diagram. To this end, we define a set of meta-model transformations from the platform independent model (BPMN) to the platform specific model (BPEL4WS).

Paper Nr: 248
Title:

Model of Radical Changes and Introduction of Discrete Production Management System

Authors:

Evgeny Abakumov

Abstract: This article describes an information system for discrete production management and its implementation using J. Kotter's model of radical changes. The need for organizational change is caused by the necessity of a deep transformation of the business processes of discrete production in the light of significant changes in modern information technology. These changes offer great opportunities for improving the efficiency of enterprises in their main area of activity. However, staff resistance to change makes it necessary to find solutions to overcome it.

Paper Nr: 281
Title:

A Systematic Mapping of Software Requirements Negotiation Techniques

Authors:

Lucas Tito, Alexandre Estebanez and Andréa Magalhães

Abstract: [Context] Eliciting requirements is a commonly discussed task. However, once requirements are elicited, it is essential for a software project that they are sufficient for stakeholders to reach their goals. Therefore, techniques to negotiate schedule, price, quality, and scope among stakeholders are important. [Goal] This paper aims at identifying and presenting characteristics of techniques that have been proposed and/or used to negotiate software requirements. [Method] A mapping study was planned and conducted to identify techniques and to capture their characteristics. Those characteristics include description, environment (e.g. academic, industrial), the types of research being published, and the types of primary studies. The main findings of the papers, and the advantages and disadvantages reported for these techniques, were also summarized. [Results] We mapped the characteristics of 10 different requirements negotiation techniques identified in 33 papers which met our inclusion criteria. We found that most of the identified techniques can be seen as variations of the seminal WinWin requirements negotiation technique proposed in 1994. [Conclusions] The conducted mapping study provides an interesting overview of the area and may also be useful to ground future research on this topic.

Paper Nr: 309
Title:

Correlation between Similarity and Variability Metrics in Search-based Product Line Architecture: Experimental Study and Lessons Learned

Authors:

Yenisei Delgado Verdecia and Thelma Elita Colanzi

Abstract: The Product Line Architecture (PLA) plays a central role in the development of products from a Software Product Line (SPL). PLA design is a people-intensive and non-trivial task; it can thus be considered a hard problem, formulated as an optimization problem with many factors, to be solved by search algorithms. In this sense, the approach named MOA4PLA (Multi-Objective Approach for Product-Line Architecture Design) was proposed to automatically identify the best alternatives for a PLA design. This approach originally included metrics to evaluate basic design principles, feature modularization, design elegance and SPL extensibility. However, there are other relevant properties for PLA design, and for this reason the evaluation model of MOA4PLA was extended with metrics to measure the level of similarity and adaptability of the PLA. The objective of this work is to investigate the possible correlation between the metrics related to similarity and variability, in order to decrease the number of functions to be optimized. To this end, three experiments were carried out. The empirical results allowed us to learn some lessons regarding these metrics in this context.

Paper Nr: 321
Title:

Automatic Semantic Annotation: Towards Resolution of WFIO Incompatibilities

Authors:

Chahrazed Tarabet

Abstract: Inter-organizational workflows (IOWF) allow for the orchestration of processes across different organizations, but the incompatibilities they reveal pose a serious problem. Nevertheless, there are approaches that can remedy this problem, notably semantic annotation. In this position paper, we present a study whose objective is to address the detection and correction of these incompatibilities between workflow partners. For this purpose, the semantic annotation phase of inter-organizational workflows must be improved, optimized and automated in order to achieve IOWF incompatibility resolution.

Area 4 - Software Agents and Internet Computing

Full Papers
Paper Nr: 77
Title:

Lexical Context for Profiling Reputation of Corporate Entities

Authors:

Jean-Valère Cossu and Liana Ermakova

Abstract: Opinion and trend mining on micro-blogs like Twitter has recently attracted research interest in several fields, including Information Retrieval (IR) and Natural Language Processing (NLP). However, the performance of existing approaches is limited by the quality of the available training material. Moreover, this lack of data makes it difficult to explain automatic systems' suggestions for decision support. One promising solution to this issue is the enrichment of textual content using large micro-blog archives or external document collections, e.g. Wikipedia. Despite some advances in the Reputation Dimension Classification (RDC) task promoted by RepLab, it remains a research challenge. In this paper we introduce a supervised classification method for RDC based on a threshold intersection graph. We analyzed the impact of various micro-blog extension methods on RDC performance. We demonstrated that simple statistical NLP methods that do not require any external resources can be easily optimized to outperform the state-of-the-art approaches in the RDC task. The conducted experiments also showed that micro-blog enrichment by effective expansion techniques can improve classification quality.

Paper Nr: 78
Title:

myBee: An Information System for Precision Beekeeping

Authors:

Luis Gustavo Araujo Rodriguez, João Almeida de Jeus, Vanderson Martins do Rosário, Anderson Faustino da Silva and Lucimar Pontara Peres

Abstract: Over the last few years, beekeepers have become aware of the necessity of integrating Information Technology into apiculture. As a result, Precision Beekeeping emerged, an area that applies said technology to monitor bees and, consequently, the state of the colony. However, the development of these platforms is complex, because they must adapt to various heterogeneous environments and constantly updated technologies. Thus, the objective of this paper is to propose and develop a Precision Beekeeping information system, called myBee, whose infrastructure is flexible, secure, fault-tolerant, and efficient in decision-making. A real case study shows that myBee is an efficient information system that supports beekeepers in maintaining their apiaries.

Paper Nr: 85
Title:

A Multi-criteria Scoring Method based on Performance Indicators for Cloud Computing Provider Selection

Authors:

Lucas Borges de Moraes

Abstract: Cloud computing is a service model that allows hosting and on-demand distribution of computing resources all around the world via the Internet. Thus, cloud computing has become a successful paradigm that has been adopted and incorporated by virtually all major IT companies (e.g., Google, Amazon, Microsoft). Following this success, a large number of new companies were created to compete as providers of cloud computing services. This fact hinders the clients' ability to choose, among these several cloud computing providers, the most appropriate one for their requirements and computing needs. This work specifies a logical/mathematical multi-criteria scoring method able to select the most appropriate cloud computing provider(s) for the user (customer), based on the analysis of the performance indicator values desired by the customer and associated with every cloud computing provider that supports the demanded requirements. The method is a three-stage algorithm that evaluates, scores, sorts and selects different cloud providers based on the utility of their performance indicators for each specific user of the method. An example of the method's usage is given in order to illustrate its operation.
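The "evaluate, score, sort and select" stages described above can be sketched in a few lines; the indicator names, utility function and weighting scheme below are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of multi-criteria scoring for cloud provider selection.
# Indicators, weights and the utility function are illustrative only.

def score_providers(providers, preferences):
    """Score each provider by the weighted utility of its performance indicators.

    providers:   {name: {indicator: value}}
    preferences: {indicator: (desired_value, weight, higher_is_better)}
    """
    scores = {}
    for name, indicators in providers.items():
        total = 0.0
        for ind, (desired, weight, higher_is_better) in preferences.items():
            value = indicators.get(ind)
            if value is None:          # provider does not support the requirement
                total = float("-inf")
                break
            # Utility in [0, 1]: how well the value meets the desired level.
            ratio = value / desired if higher_is_better else desired / value
            total += weight * min(ratio, 1.0)
        scores[name] = total
    # Sort best-first, mirroring the evaluate/score/sort/select stages.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranked = score_providers(
    {"A": {"uptime": 99.9, "latency_ms": 40},
     "B": {"uptime": 99.0, "latency_ms": 25}},
    {"uptime": (99.5, 0.7, True), "latency_ms": (30, 0.3, False)},
)
```

Here provider B ranks first because its much lower latency outweighs its slightly lower uptime under these (assumed) weights.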

Paper Nr: 118
Title:

The Use of Time Dimension in Recommender Systems for Learning

Authors:

Eduardo José de Borba

Abstract: When the number of learning objects is huge, especially in the e-learning context, users can suffer cognitive overload: they cannot find useful items and may feel lost in the environment. Recommender systems are tools that suggest the items that best match users' interests and needs. However, traditional recommender systems are not enough for learning, because this domain needs more personalization for each user profile and context. For this purpose, this work investigates Time-Aware Recommender Systems (context-aware recommender systems that use the time dimension) for learning. Based on a set of categories (defined in previous works) of how time is used in recommender systems regardless of domain, scenarios were defined that illustrate and explain how each category could be applied in the learning domain. As a result, a recommender system for learning is proposed. It combines Content-Based and Collaborative Filtering approaches in a hybrid algorithm that considers time in the Pre-Filtering and Post-Filtering phases.

Paper Nr: 122
Title:

Information Quality in Social Networks: Predicting Spammy Naming Patterns for Retrieving Twitter Spam Accounts

Authors:

Mahdi Washha and Aziz Qaroush

Abstract: The popularity of social networks is mainly conditioned by the integrity and quality of the content generated by users, as well as by the maintenance of users' privacy. More precisely, Twitter data (e.g. tweets) are valuable for a tremendous range of applications, such as search engines and recommendation systems, in which working on high-quality information is a compulsory step. However, the existence of ill-intentioned users in Twitter imposes challenges to maintaining an acceptable level of data quality. Spammers are a concrete example of ill-intentioned users. Indeed, they have misused all services provided by Twitter to post spam content, which consequently leads to serious problems such as polluted search results. As a natural reaction, various detection methods have been designed which inspect individual tweets or accounts for the existence of spam. In the context of large collections of Twitter users, applying these conventional methods is time-consuming, requiring months to filter out the spam accounts in such collections. Moreover, the Twitter community cannot apply them, either randomly or sequentially, to each registered user because of the dynamicity of the Twitter network. Consequently, these limitations raise the need to make the detection process more systematic and faster. Complementary to the conventional detection methods, our proposal takes a collective perspective on users (or accounts) to provide searchable information with which to retrieve accounts having a high potential of being spam. We provide the design of an unsupervised automatic method to predict the spammy naming patterns, as searchable information, used in naming spam accounts. Our experimental evaluation demonstrates the efficiency of predicting spammy naming patterns to retrieve spam accounts in terms of precision, recall, and normalized discounted cumulative gain at different ranks.
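The notion of a searchable "naming pattern" can be illustrated with a minimal sketch; the character-class abstraction below is an assumption for illustration only, not the paper's actual method.

```python
# Hypothetical illustration of a "naming pattern" abstraction for screen names:
# map each character to a class symbol (L = letter, D = digit, S = symbol) and
# collapse runs, so similarly generated account names share one searchable pattern.
import re
from collections import Counter

def naming_pattern(screen_name):
    """Abstract a screen name into a pattern string, e.g. 'anna12345' -> 'L+D+'."""
    classes = ["L" if c.isalpha() else "D" if c.isdigit() else "S"
               for c in screen_name]
    return re.sub(r"(.)\1+", r"\1+", "".join(classes))  # collapse repeated classes

def frequent_patterns(names, min_count=2):
    """Patterns shared by many accounts are candidate spammy naming patterns."""
    counts = Counter(naming_pattern(n) for n in names)
    return {p for p, c in counts.items() if c >= min_count}
```

A pattern shared by many accounts (e.g. `L+D+`) then becomes a query for retrieving further accounts with high spam potential.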

Paper Nr: 128
Title:

A Multicriteria Evaluation of Hybrid Recommender Systems: On the Usefulness of Input Data Characteristics

Authors:

Reinaldo Silva Fortes

Abstract: Recommender Systems (RS) may behave differently depending on the characteristics of the input data, encouraging the development of Hybrid Filtering (HF). Few works in the literature explicitly characterize aspects of the input data and how they can lead to better HF solutions, and such work is limited to combinations of Collaborative Filtering (CF) solutions, using only rating prediction accuracy as an evaluation criterion. However, it is known that RS also need to consider other evaluation criteria, such as novelty and diversity, and that HF involving more than one approach can lead to more effective solutions. In this work, we begin to explore this under-investigated area by evaluating different HF strategies involving CF and Content-Based (CB) approaches, using a variety of data characteristics as extra input data, as well as different evaluation criteria. We found that the use of data characteristics in HF proved useful when considering different evaluation criteria. This occurs in spite of the fact that the evaluated methods aim at minimizing only the rating prediction errors, without considering other criteria.

Paper Nr: 147
Title:

Skyline Modeling and Computing over Trust RDF Data

Authors:

Amna Abidi and Mohamed Anis Bach Tobji

Abstract: Resource Description Framework (RDF) data come from various sources whose reliability is sometimes questionable. Therefore, several researchers have enriched the basic RDF data model with trust information, and new methods to represent and reason with trust RDF data have been introduced. In this paper, we are interested in querying trust RDF data. We particularly tackle the skyline problem, which consists in extracting the most interesting trusted resources according to user-defined criteria. To this end, we first redefine the dominance relationship in the context of trust RDF data. Then, we propose an appropriate semantics for the trust-skyline: the set of the most interesting resources in a trust RDF dataset. Efficient methods to compute the trust-skyline are provided and compared to existing approaches. Experiments conducted on the algorithm implementations showed promising results.
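The abstract does not spell out the redefined dominance relationship; as a hedged sketch, one plausible reading treats each resource as a tuple of criteria values plus a trust score, with dominance requiring superiority on both.

```python
# Illustrative sketch of a skyline over trust-annotated tuples. This is NOT the
# paper's actual semantics; as an assumption, resource a dominates resource b
# only if a is at least as good on every (minimized) criterion AND at least as
# trusted, and strictly better in at least one respect.

def dominates(a, b):
    """a, b: (criteria, trust), with criteria to be minimized and trust maximized."""
    (ca, ta), (cb, tb) = a, b
    no_worse = all(x <= y for x, y in zip(ca, cb)) and ta >= tb
    strictly_better = any(x < y for x, y in zip(ca, cb)) or ta > tb
    return no_worse and strictly_better

def trust_skyline(resources):
    """Naive O(n^2) skyline: keep the resources dominated by no other resource."""
    return [r for r in resources
            if not any(dominates(o, r) for o in resources if o is not r)]

# Example: (price, distance) to be minimized, trust in [0, 1] to be maximized.
points = [((10, 5), 0.9), ((8, 7), 0.8), ((12, 4), 0.9), ((11, 6), 0.5)]
sky = trust_skyline(points)
```

In this example only the last tuple is dominated (it is worse than the first on both criteria and less trusted), so the skyline keeps the other three.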

Paper Nr: 162
Title:

ESKAPE: Information Platform for Enabling Semantic Data Processing

Authors:

André Pomp and Alexander Paulus

Abstract: Over the last years, many Internet of Things (IoT) platforms have been developed to manage data from public and industrial environmental settings. To handle the upcoming amounts of structured and unstructured data in those fields, some of these platforms use ontologies to model the data semantics. However, generating ontologies is a complex task, since it requires collecting and modeling all the semantics of the provided data. Since the (Industrial) IoT is evolving quickly and continuously, a static ontology will not be able to model every requirement. To overcome this problem, we developed the platform ESKAPE, which uses semantic models in addition to data models to handle batch and streaming data at an information-focused level. Our platform enables users to process, query and subscribe to heterogeneous data sources without the need to consider the data model, facilitating the creation of information products from heterogeneous data. Instead of using a pre-defined ontology, ESKAPE uses a knowledge graph which is expanded by semantic models that users define for their data sets. Utilizing these semantic annotations enables data source substitution and frees users from analyzing data models to understand their content. A first prototype of our platform was evaluated in a user study in the form of a competitive hackathon, during which the participants developed mobile applications based on data published on the platform by local companies. The feedback given by the participants reveals the demand for platforms that are capable of handling data on a semantic level and allow users to easily request data that fits their application.

Paper Nr: 301
Title:

Textual Analysis for the Protection of Children and Teenagers in Social Media - Classification of Inappropriate Messages for Children and Teenagers

Authors:

Thársis Salathiel de Souza Viana, Marcos de Oliveira and Ticiana Linhares Coelho da Silva

Abstract: Nowadays the Internet is widely used by children and teenagers, whose privacy and protection from exposure are often not prioritised. This can leave them exposed to paedophiles, who can use a simple chat to start a conversation, which may be the first step towards sexual abuse. In (Falcão Jr. et al, 2016), the authors proposed a tool to detect possibly dangerous conversations for a minor in a social network, based on the minor's behaviour. However, the proposed tool does not thoroughly address the analysis of the messages exchanged, and it attempts to detect the suspicious ones in a chat conversation using a superficial approach. This project aims to extend (Falcão Jr. et al, 2016) by automatically classifying the messages exchanged between a minor and an adult in a social network, in order to separate those that seem to come from a paedophile from those that seem to be a normal conversation. An experiment with a real conversation was conducted to test the effectiveness of the created model.

Paper Nr: 306
Title:

Information Quality in Online Social Networks: A Fast Unsupervised Social Spam Detection Method for Trending Topics

Authors:

Mahdi Washha, Dania Shilleh and Yara Ghawadrah

Abstract: Online social networks (OSNs) provide data valuable for a tremendous range of applications such as search engines and recommendation systems. However, the easy-to-use interactive interfaces and low barriers to publication have exposed various information quality (IQ) problems, decreasing the quality of user-generated content (UGC) in such networks. The existence of a particular kind of ill-intentioned user, the so-called social spammer, imposes challenges to maintaining an acceptable level of information quality. Social spammers simply misuse all services provided by social networks to post spam content in an automated way. As a natural reaction, various detection methods have been designed which inspect individual posts or accounts for the existence of spam. The major limitation of these methods is that they are based on supervised learning, requiring ground-truth data sets. Moreover, the account-based detection methods are not practical for processing "crawled" large collections of social posts, requiring months to process such collections. Post-level detection methods, although applicable in terms of processing time, have a further drawback: they adapt poorly to the dynamic behavior of spammers, because their features are weak at discriminating between spam and non-spam. Hence, in this paper, we introduce the design of an unsupervised learning approach dedicated to detecting the spam accounts (or users) existing in large collections of trending topics, from a collective perspective. More precisely, our method leverages the available simple meta-data about users and the published posts (tweets) related to a topic, as heuristic information, to find any correlation among spam users acting as a spam campaign. Compared to supervised learning methods, our experimental evaluation demonstrates the efficiency of predicting spam accounts (users) in terms of the accuracy, precision, recall, and F-measure performance metrics.

Paper Nr: 325
Title:

Incorporating Situation Awareness into Recommender Systems

Authors:

Jeremias Dötterl

Abstract: Nowadays, smartphones and sensor devices can provide a variety of information about a user's current situation. So far, many recommender systems neglect this kind of information and thus cannot provide situation-specific recommendations. Situation-aware recommender systems adapt to changes in the user's environment and are therefore able to offer recommendations that are more appropriate for the current situation. In this paper, we present a software architecture that enables situation awareness for arbitrary recommendation techniques. The proposed system considers both (semi-)static user profiles and volatile situational knowledge to obtain meaningful recommendations. Furthermore, we present the implementation of the architecture in a museum of natural history, which uses Complex Event Processing to achieve situation awareness.

Short Papers
Paper Nr: 25
Title:

Web of Goals: A Proposal for a New Highly Smart Web

Authors:

Meriem Benhaddi

Abstract: Since the revolution in Web use known as Web 2.0 and the birth of a third version, the semantic Web or Web 3.0, users' needs have kept changing and becoming more demanding in all aspects of life (health, education, economy, etc.). This has given rise to a new wave of principles, concepts and technologies that constitute a new smart Web, called Web 4.0, which brings new solutions. To date there is no exact definition of Web 4.0, much less a definition of its architectural principles; however, Web 4.0 is the new Web generation built on Web 3.0 and Web 2.0 principles, in addition to new notions such as artificial intelligence, mind-controlled interfaces and intelligent goal-searching engines. Web 4.0 offers more autonomy and creative opportunities to end users so that they can quickly reach their goals by efficiently expressing their needs, creating new applications or adapting existing ones to their personal contexts. In this paper, we give our own definition of the new smart Web 4.0 by highlighting what makes it different from the earlier Web versions; we then propose architecture elements that would allow transforming the Web into an Ultra-Intelligent Electronic Agent. We introduce a motivational scenario that illustrates and supports the feasibility of our point of view.

Paper Nr: 51
Title:

A Trust Reputation Architecture for Virtual Organization Integration in Cloud Computing Environment

Authors:

Luís Felipe Bilecki

Abstract: A Virtual Organization (VO) represents a prominent collaboration initiative in which a set of entities share competencies and risks in pursuit of a common goal. Moreover, their interactions can be supported over the Internet, using Cloud Computing (CC) resources. The integration of VO and CC brings several benefits, such as reduction of costs and maintenance, and interoperability, among others. However, there are issues related to privacy, trust and security that need to be addressed. One of the issues observed is how much trust VO members put in the cloud provider (CP), particularly in a scenario where VO members use the resources provided by a CP to make their services available and to interact with other members. Thus, the proposed reputation architecture intends to assist the decision-making processes present in the VO's life-cycle by computing the reputation of CP trust. The reputed trust is based on two sources: a) objective (Quality of Service (QoS) indicators) and b) subjective (feedback from users regarding those QoS indicators). The evaluation results show that the architecture is resilient to attacks on subjective trust during the reputation calculation. It is also possible to note that the proposed architecture presents an acceptable average time for each operation, and plays a significant role during the VO's creation and operation.

Paper Nr: 110
Title:

A Real-time Targeted Recommender System for Supermarkets

Authors:

Panayiotis Christodoulou

Abstract: Supermarket customers find it difficult to choose from a large variety of products or to stay informed about the latest offers in a store based on the items that they need or wish to purchase. This paper presents a framework for a recommender system deployed in a supermarket setting with the aim of suggesting real-time personalized offers to customers. As customers navigate a store, iBeacons push personalized notifications to their smart devices, informing them about offers that are likely to be of interest. The suggested approach combines an entropy-based algorithm, hard k-modes clustering and Bayesian inference to notify customers about the best offers based on their shopping preferences. The proposed methodology improves the customer's overall shopping experience by suggesting personalized items with accuracy and efficiency. Simultaneously, the properties of the underlying techniques tackle data sparsity, the cold-start problem and other scalability issues that are often met in recommender systems. A preliminary setup in a local supermarket confirms the validity of the proposed methodology in terms of accuracy, outperforming the traditional user-based and item-based Collaborative Filtering approaches.

Paper Nr: 133
Title:

Is Products Recommendation Good? An Experiment on User Satisfaction

Authors:

Jaime Wojciechowski, Rafael Romualdo Wandresen and Rafaela Mantovani Fontana

Abstract: Recommendation systems may use different algorithms to present relevant information to users. In e-commerce contexts, these systems are essential to provide users with a customized experience. Several studies have evaluated different recommendation algorithms for their accuracy, but only a few evaluate algorithms from the user satisfaction viewpoint. We present a study that aims to identify how different recommendation algorithms trigger different perceptions of satisfaction in users. Our research approach was an experiment using product and sales data from a real small retailer. Users expressed their perception of satisfaction for three different algorithms. The study results show that the proposed algorithms did not trigger different perceptions of satisfaction in users, giving clues for improvements to small retailers' websites.

Paper Nr: 260
Title:

Run-time Software Upgrading Framework for Mission Critical Network Applications

Authors:

Seung-Woo Hong

Abstract: In mission-critical and safety-critical software applications, such as Internet infrastructure, telecommunication, military and medical applications, service continuity is very important. Since for these applications it is unacceptable to shut down and restart the system during a software upgrade, run-time software upgrade techniques, which enable online maintenance and upgrades without shutting down the system, can meet the demand for high levels of system availability and service continuity. However, upgrading an application while it is running, without shut-down, is a complex process. The new and the old component may differ in functionality, interface, and performance. Only selected components of an application are changed while the other parts of the application continue to function. It is important to safeguard the software application's integrity when changes are implemented at runtime. Various researchers have employed different tactics to solve the problem of run-time software upgrade, such as compiler-based, hardware-based, and analytic-redundancy-based methods. In order to ensure a reliable run-time upgrade, we designed and implemented a software-framework-based run-time upgrading method, in which the ability to make runtime modifications is considered at the software architecture level. In this paper, we present the software component architecture for run-time upgrade and the software upgrade procedure, and then show the implementation results.
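One common way to make selected components replaceable while the rest of the application keeps running is indirection through a registry; the sketch below is a generic illustration of that idea under assumed names, not the framework described in the paper.

```python
# Generic, hypothetical illustration of architecture-level run-time upgrade:
# clients call components only through a registry, so a component can be
# re-bound atomically while the application keeps running.
import threading

class ComponentRegistry:
    """Maps component names to implementations behind a lock, so an upgrade
    is an atomic re-binding rather than a process restart."""
    def __init__(self):
        self._lock = threading.Lock()
        self._components = {}

    def register(self, name, component):
        with self._lock:
            self._components[name] = component

    def call(self, name, *args):
        with self._lock:          # callers always resolve the current version
            component = self._components[name]
        return component(*args)

registry = ComponentRegistry()
registry.register("checksum", lambda data: sum(data) % 256)      # old version
before = registry.call("checksum", [300, 1])
registry.register("checksum", lambda data: sum(data) % 65536)    # upgraded in place
after = registry.call("checksum", [300, 1])
```

Real frameworks must additionally handle in-flight calls and state transfer between the old and new component versions, which this sketch omits.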

Paper Nr: 261
Title:

A Personal Analytics Platform for the Internet of Things - Implementing Kappa Architecture with Microservice-based Stream Processing

Authors:

Theo Zschörnig

Abstract: The foundation of the Internet of Things (IoT) consists of different devices equipped with sensors, actuators and tags. With the emergence of IoT devices and home automation, the advantages of data analysis are no longer limited to businesses and industry. Personal analytics focuses on the use of data created by individuals for their own benefit. Current IoT analytics architectures are not designed to respond to the needs of personal analytics. In this paper, we propose a lightweight, flexible analytics architecture based on the concept of the Kappa Architecture and microservices. It aims to provide an analytics platform for huge numbers of different scenarios with limited data volume and different rates of data velocity. Furthermore, the motivation for and challenges of personal analytics in the IoT are laid out and explained, as well as the technological approaches we use to overcome the shortcomings of current IoT analytics architectures.

Paper Nr: 264
Title:

Generation and Transportation of Transaction Documents using Payment Infrastructure

Authors:

Gatis Vitols

Abstract: Mobile payments are rapidly increasing in e-commerce transactions. The industry has well-established procedures for processing payments. However, the processes for managing transaction documents (e.g. warranty cards, insurance policies, etc.) are still undeveloped and raise issues of information system and document format fragmentation. This paper addresses the transaction document management issue by introducing improvements to payment procedures, such as transaction processing using multiple payment methods for a single payment, dividing the point of interaction into elements to further divide the needs of the associated parties, and transaction messaging schemes which can be used for the transportation of transaction documents. The aim of this research is to propose improvements to payment processing process models by introducing the generation and transportation of transaction documents using unified documents and concepts. The improved procedures are further planned to be implemented in a micropayment company's payment processing for approbation. The results show that dividing the transaction document into multiple types provides the ability to track the generation steps of transaction documents and to correspond these steps with transaction processing results and messages, thereby making these two previously unconnected processes into one new global service processing process. The proposed improvements to the mobile payment infrastructure support the creation and management of transaction documents using unified documents and concepts.

Posters
Paper Nr: 235
Title:

The Incorporation of Drones as Object of Study in Energy-aware Software Engineering

Authors:

Luis Corral

Abstract: As drones expand their ability to perform longer and more complex tasks, one of the first concerns that arises is their capacity to perform those tasks in a reliable way. Reliability can be understood from different aspects: the ability of the drone to perform accurately, safely and autonomously. In this paper, we focus on understanding the current efforts to ensure the last of these qualities, autonomy, from the point of view of energy-awareness for drone systems. It emerges that the study of drones in energy-aware Software Engineering is still an emerging, unexplored area, which requires learning from advances and experimentation in other mobile and ubiquitous devices such as cellular phones or tablets. It is still necessary to understand the opportunities and limitations of drones as computational targets. A research agenda should be set and followed to leverage software as an opportunity to foster drones as energy-aware devices.

Paper Nr: 270
Title:

Workflow for the Internet of Things

Authors:

Debnath Mukherjee

Abstract: Business processes are an important part of a business. Businesses need to meet the SLAs (Service Level Agreements) required by their customers. KPIs (Key Performance Indicators) measure the efficiency and effectiveness of business processes. Meeting SLAs and improving KPIs are the goals of an organization. In this paper, we describe the benefits of workflow technology for the IoT (Internet of Things) world. We discuss how workflows enable tracking the state of various processes, thus giving the business owner insight into the state of the business. We discuss how, by defining IoT workflows, prediction of imminent SLA violations can be achieved. We describe how IoT workflows can be triggered by low-level IoT messages. Finally, we show the architecture of an IoT workflow management system and present experimental results.

Paper Nr: 300
Title:

A Context Aware Approach for Promoting Tourism Events: The Case of Artist’s Lights in Salerno

Authors:

Francesco Colace and Saverio Lemma

Abstract: This paper introduces a context-aware app for tourism. The app is based on a graphical formalism for context representation: the Context Dimension Tree. The aim is to propose a context-aware approach that acts as dynamic support for tourists equipped with a mobile device, which reacts to a change of context by adapting the user interface according to the tourist's current position and global profile. For example, the system can guide the tourist in the discovery of a town by proposing the events most interesting to that user. A case study applied to a Christmas event in Salerno, an Italian town, has been analyzed considering various users (Italian tourists, foreign tourists, etc.), and an experimental campaign has been conducted, obtaining interesting results.

Area 5 - Human-Computer Interaction

Full Papers
Paper Nr: 58
Title:

Behavioral Economics in Information Systems Research: A Persuasion Context Analysis

Authors:

Michael Oduor and Harri Oinas-kukkonen

Abstract: In recent years, there has been growth in information systems (IS) research applying psychological theories that focus on people's perceptions of technology use and on how technology can motivate positive change. Behavioral economics, grounded in cognitive and psychological principles, on the other hand studies irrationalities in people's behavior from an economics perspective and is a field that has lately started to gain credence in the IS literature. This study's aim is to establish the depth of behavioral economics studies in IS research by reviewing the basket of eight journals, using the persuasive systems design model as an analytical tool. From this extant literature, similarities and complementary properties with other disciplines can be integrated, and improved methods of understanding users and their actions can be used for better prevention and intervention techniques, especially in the domains of health IS and sustainability or Green IS.

Paper Nr: 73
Title:

Audio Description on Instagram: Evaluating and Comparing Two Ways of Describing Images for Visually Impaired

Authors:

João Marcelo dos Santos Marques, Luiz Fernando Gopi Valente and Simone Bacellar Leal Ferreira

Abstract: The social network Instagram encourages interaction among users around audio-visual content (pictures and short videos). However, this type of content still presents a barrier for the visually impaired. To mitigate this problem, screen readers can be used, but those only work for images that have accompanying text in the form of captions. Audio description, on the other hand, is a technique that translates visual images into words, allowing the comprehension of these elements. This technique has been used in many fields, fostering a scenario of inclusion and opportunity for this public. The objective of this paper is to evaluate and compare two forms of describing images published on Instagram: one using descriptive text read by a screen reader and another using audio description recorded by the image's own author. Through an empirical study, we identified the form of image description preferred by the visually impaired participants and whether the use of audio description on Instagram would encourage its use by this public.

Paper Nr: 80
Title:

Multiple-perspective Visual Analytics for GRC Platforms

Authors:

Vagner F. de Santana, David Byman and Nathaniel Mills

Abstract: GRC (Governance, Risk, and Compliance) data is voluminous and highly interrelated, yet sparsely populated. This fact represents one of the biggest challenges when creating visualizations for such datasets: the data does not align well in the tabular structure typically used to populate displays and reports. GRC platforms provide reporting capabilities and data visualization techniques to summarize data, yet most common GRC visualizations are restricted to certain inflexible perspectives, e.g., the Risk Matrix. This work presents a Visual Analytics system that provides multiple visual perspectives over GRC data. The evaluation of the system involved four GRC specialists. The results show that the multiple-perspectives approach supports the summarization of different portions of the GRC data, especially regarding business process and business entity taxonomies, and risk/control relationships. The results provide useful insights for specialists working to explore and summarize GRC data and to integrate Visual Analytics systems with GRC platforms. In addition, the multiple-perspective approach presented could also be applied to systems sharing the same data structure that GRC platforms use.

Paper Nr: 93
Title:

Influence of Human Personality in Software Engineering - A Systematic Literature Review

Authors:

Anderson S. Barroso and Jamille S. Madureira da Silva

Abstract: The personality of software engineering professionals has been a continuous element of interest in academic research. Researchers have applied different models of personality analysis in various software engineering areas to identify improvement points, promote job satisfaction, and better organize teams. This paper conducts a systematic literature review (SLR) to evaluate the personality models applied in software engineering and to understand how human personality influences professionals' work. Three models were identified as the most frequently used to evaluate software engineering professionals (MBTI, Big Five and FFM). There is evidence of the influence of personality on the activities performed. However, some results suggest that studying personality is not an easy task, because there are contradictions in findings that challenge the validity of the studies.

Paper Nr: 97
Title:

Relationship between Personality Traits and Software Quality - Big Five Model vs. Object-oriented Software Metrics

Authors:

Anderson S. Barroso, Jamille S. Madureira da Silva, Thiago D. S. Souza and Bryanne S. de A. Cezario

Abstract: Analyzing the personality of software developers has been a topic discussed by many researchers over the past few years. However, its relation to software metrics has hardly been mentioned in the literature. This work aims to identify the influence of human personality on the quality of software products. First, a psychological test based on the Big Five model was administered to a set of developers working in industry; subsequently, object-oriented software metrics were applied to software developed individually by members of the same group. Statistical analysis showed that the factors Conscientiousness, Neuroticism and Openness to Experience have a significant relationship with the Cyclomatic Complexity metric. In addition, the factors Extroversion, Agreeableness and Neuroticism have a significant relationship with the Coupling Between Objects metric. In another analysis, taking into account ideal average values for each software metric, Extroversion and Neuroticism were found to have a significant relationship with the Depth of Inheritance Tree metric. Extroversion and Neuroticism were the only factors with a significant relationship to software metrics in both proposed analyses. Therefore, additional studies are needed to determine any deeper connection between personality and software quality.

Paper Nr: 98
Title:

An Experiment to Assess an Acquisition Platform and Biomedical Signal Conditioning

Authors:

Diego Assis Siqueira Gois and João Paulo Andrade Lima

Abstract: As physical computing has grown and the “Do it Yourself” (DIY) concept has spread, various open-source electronics platforms have emerged, such as Arduino and Raspberry Pi. Still, these platforms are not suited for the acquisition and conditioning of biomedical signals. Inspired by the DIY concept, this paper presents a framework for the acquisition and conditioning of biomedical signals, called YouMake, composed of various interconnected, interchangeable, inter-configurable and reconfigurable boards. Moreover, the boards are low cost and well documented, making prototyping easy. The experimental evaluation of the platform was performed with a group of people who used it, measuring usability and time spent. The results showed no statistical differences between the “with experience” and “without experience” groups and, moreover, that the platform can reliably be used as a low-cost alternative for the acquisition and conditioning of biomedical signals.

Paper Nr: 170
Title:

Towards Interactive Data Processing and Analytics - Putting the Human in the Center of the Loop

Authors:

Michael Behringer

Abstract: Today, it is increasingly important for companies to evaluate data and use the information it contains. In practice, however, this is a great challenge, especially for domain users who lack the necessary technical knowledge. Analyses prefabricated by technical experts do not provide the necessary flexibility and are often implemented by the IT department only if there is sufficient demand. Concepts like Visual Analytics or Self-Service Business Intelligence involve the user in the analysis process and try to reduce the technical requirements. However, these approaches either cover only specific application areas or do not consider the entire analysis process. In this paper, we present an extended Visual Analytics process that puts the user at the center of the analysis. Based on a use case scenario, requirements for this process are determined and, later on, a possible application for this scenario is discussed that emphasizes the benefits of our approach.

Paper Nr: 283
Title:

Usability and User Experience Evaluation of Learning Management Systems - A Systematic Mapping Study

Authors:

Walter Takashi Nakamura

Abstract: Background: Advances in technology have made possible the development of powerful platforms called Learning Management Systems (LMSs), designed to support the teaching and learning process. Studies show that the usability and User Experience (UX) of such platforms may influence this process. Although several studies have been conducted in this area, most are at initial stages and need improvements or deeper empirical studies. Aim: This work analyzes scientific publications in order to characterize usability and UX evaluation techniques in the context of LMSs. Method: We performed a systematic mapping study of usability and UX evaluation techniques in the context of LMSs. Results: A total of 62 publications were accepted in this mapping, which helped identify the techniques used to evaluate the usability and UX of LMSs and their characteristics, such as origin, type, performing method, learning factors, restrictions and availability. Conclusion: Several studies have been conducted on the evaluation of LMSs. However, there are still gaps, such as the lack of techniques with certain features, e.g., feedback with suggestions to correct the identified problems. Besides, there is insufficient evidence regarding which technique is best suited for this context.

Short Papers
Paper Nr: 84
Title:

Facial Expression Recognition Improvement through an Appearance Features Combination

Authors:

Taoufik Ben Abdallah

Abstract: This paper suggests an approach to automatic facial expression recognition for images of frontal faces. Two appearance feature extraction methods are combined: Local Binary Patterns (LBP) on the whole face region and Eigenfaces on the eyes-eyebrows and/or mouth regions. Support Vector Machines (SVM), K-Nearest Neighbors (KNN) and MultiLayer Perceptron (MLP) are applied separately as learning techniques to generate classifiers for facial expression recognition. Furthermore, we conducted several empirical studies to set the optimal parameters of the approach. We validate our approach on three baseline databases, recording promising results compared to related works for faces under both controlled and uncontrolled environments.

Paper Nr: 144
Title:

Adaptation of Learning Object Interface based on Learning Style

Authors:

Zenaide Carvalho da Silva

Abstract: Learning styles (LS) refer to the ways in which a student prefers to learn in the teaching and learning process. Each student has their own way of receiving and processing information, and bearing the learning style in mind is important to better understand their individual preferences and why certain teaching methods and techniques work better for some students than for others. We believe that knowledge of these styles makes it possible to adapt teaching, reorganizing methods and techniques to allow learning suited to the individual needs of the student. This is possible through the creation of online educational resources adapted to the student's style. In this context, this article presents the structure of a learning object interface adaptation based on learning style. It should enable the creation of learning objects adapted to the student's learning style, contributing to increased student motivation in the use of learning objects as educational resources.

Paper Nr: 172
Title:

Towards Personalised Multimedia Applications - A Literature Review

Authors:

Sebastian Sastoque H.

Abstract: Multimedia applications are now commonly used in daily life in several domains, such as marketing, health, learning and entertainment, among others. As the number of available applications increases, a competitive factor is the level of alignment with personal preferences. Indeed, the role of multimedia content has been crucial to generating user-centred applications. However, multimedia content personalisation requires complex systems that execute diverse tasks such as representation, modelling, annotation and retrieval. Research in this field has focused on the content annotation and retrieval perspectives. Despite this, these perspectives do not address two key personalisation factors, namely personal preferences and contextual knowledge. This work presents a literature review aimed at identifying theoretical elements related to personalisation that could be integrated into the most common approaches. As a result, a road map for future research is established.

Paper Nr: 188
Title:

On the Development of Serious Games in the Health Sector - A Case Study of a Serious Game Tool to Improve Life Management Skills in the Young

Authors:

Tanja Korhonen and Raija Halonen

Abstract: The current research focuses on serious games (SG) in the healthcare sector. The objective was to identify the key phases in the design and development of SG and to study how serious game design takes into account affective computing. The case study describes the development of Game of My Life (GoML), a visual novel aiming to support the life management skills of adolescents. The game was developed in two phases using iterative agile methods in cooperation with different stakeholders. The evaluation indicates that GoML can be used as an effective discussion tool for professionals and patients in nursing and youth work. The results support our existing knowledge of SG development and reveal that SG design takes into account affective computing by nature: game design deliberately influences emotions in order to engage the players.

Paper Nr: 268
Title:

Collaborative, Social-networked Posture Training (CSPT) through Head-and-Neck Posture Monitoring and Biofeedbacks

Authors:

Da-Yin Liao

Abstract: This research is motivated by the need for a tool to train elementary and middle-school students to maintain good posture while sitting. We propose a collaborative, social-networked approach to designing a posture training tool so that students can become aware of and promptly correct bad posture. The tool is composed of a wearable posture training headset, a social-network App, and cloud storage and computing services. The wearable headset is equipped with real-time sensors to monitor head and neck posture. The App provides biofeedback mechanisms of sound, voice, or vibration to remind students when their posture becomes bad. In the App, students and their guardians can review posture history and statistical analyses of their posture. Students can also glance over their friends’ posture performance. Through this collaborative, social-networked approach, peer influence encourages students to maintain good posture.

Paper Nr: 289
Title:

Context-aware Adaption of Software Entities using Rules

Authors:

Lauma Jokste and Jãnis Grabis

Abstract: Context-aware systems are gaining recognition in the rapidly growing information systems market. Run-time adaptation of systems based on contextual information has been considered a powerful means towards better system performance, helping to reach overall organizational goals and improve key performance indicators. This paper describes a concept in which an information system is divided into many software entities, each of which can be context dependent. The execution routines of context-dependent software entities are observed, and these observations are used to formulate context dependency rules, either manually or by machine learning. Rule-based adaptation allows the adaptation process to be monitored transparently and allows human knowledge to be taken into account. Entity-based adaptation provides a uniform approach to introducing context dependency into different parts of the software.

Paper Nr: 302
Title:

Supporting Decision Making during Emergencies through Information Visualization of Crowdsourcing Emergency Data

Authors:

Paulo Simões Jr., Pedro O. Raimundo and Renato Novais

Abstract: Decision making during an emergency response requires having the right information provided in the right way to the right people. Relevant information about an emergency can be provided by several sources, including the crowd at the place where the emergency is happening. A big challenge is how to avoid overwhelming decision makers with unnecessary or redundant information provided by the crowd. Our hypothesis is that appropriate information visualization techniques improve the understanding of information sent by a crowd during an emergency. This work presents an approach to visualizing emergency information gathered through crowdsourcing, which improves context-aware decision making by keeping a real-time emergency state board. The approach was implemented in ERTK as a proof of concept and evaluated with 15 emergency management experts in Brazil. The results show that our approach has the potential to assist context-aware decision making during an emergency response.

Paper Nr: 310
Title:

Interpreting and Leveraging Browser Interaction for Exploratory Search Tasks

Authors:

Dominic Stange and Michael Kotzyba

Abstract: In this paper we introduce a novel approach to modeling and interpreting search behavior for exploratory search using a so-called exploration graph. We use an existing methodology for logging and analyzing user interactions with a web browser and add an interpretation step that can be used, e.g., to integrate sensemaking or browsing patterns into the log data. We conducted a user study and show that: (a) interaction logs can be interpreted semantically, (b) semantic interpretations lead to a more connected exploration graph, and (c) multiple (even contradicting) interpretations of the same search behavior may exist at the same time. We also show how our theoretical model can be applied in the area of professional search by incorporating insights gained from the model into novel recommendation and machine learning approaches.

Posters
Paper Nr: 238
Title:

Comparing Usability, User Experience and Learning Motivation Characteristics of Two Educational Computer Games

Authors:

Omar Álvarez-Xochihua, Pedro J. Muñoz-Merino and Mario Muñoz-Organero

Abstract: Educational computer games are very popular nowadays and can bring many benefits to the learning process. Usability, user experience and learning motivation are important factors in the design of educational computer-based games. Although there are existing educational games designed under these principles, a comparison between different educational tools is needed in order to understand which design criteria can make one tool more successful than another. This work presents the results of a comparison between two competition-based educational games. The study was conducted with 41 master's students evaluating the two games. Based on quantitative and qualitative data, the study revealed features that might lead to better usability, user experience and learning motivation. Additionally, we found a strong positive correlation of usability and user experience with learning motivation.

Paper Nr: 256
Title:

Guideline for Designing Accessible Systems to Users with Visual Impairment: Experience with Users and Accessibility Evaluation Tools

Authors:

Caroline Guterres Silva

Abstract: Nowadays, society uses computer systems in diverse day-to-day activities, such as shopping, social interaction, study and research; however, a considerable portion of the population, those who have some kind of special need, face difficulties in using these systems for various reasons; for example, code is often not written in a way that allows screen readers to identify menus, contents, etc., and read them correctly to users. In this context, this paper describes research conducted to identify guidelines and/or techniques for coding a document so as to facilitate interaction between the visually impaired and the computer. By applying these guidelines to a prototype and then submitting it to testing with visually impaired users, it was observed that the source code was more legible for screen readers and that user interaction was facilitated; however, during user testing, improvements that could be applied to the existing guidelines were also identified. Besides user testing, this paper reports on automated validators and their criteria for source code accessibility. Notably, automated verification does not exclude tests involving users, because both kinds of test are important in the process of ensuring accessibility.

Paper Nr: 293
Title:

Improving Healthcare through Human City Interaction

Authors:

Tim Woolliscroft and Simon Polovina

Abstract: The study of information technology has given insufficient focus to (a) structural factors and (b) the community perspective. As information systems become increasingly integrated with human systems, these wider influences are more important than ever. Human city interaction concepts, including their interplay with cyber-physical systems and social computing, are applied to healthcare. Through Structuration Theory, insights are given into how healthcare, seen through the human city interaction lens, can most effectively be improved.

Paper Nr: 308
Title:

Heidegger, Technology and Sustainability - Between Intentionality, Accountability and Empowerment

Authors:

Angela Lacerda Nobre

Abstract: Transition is the adequate term for characterising contemporary societies. Norms and values are in transit, led by a technological revolution which is, in itself, the tip of the iceberg of millenary social and cultural changes. Heidegger, one of the leading philosophers of the twentieth century, captured this tension between social change and innovative technology and showed that Western civilisation was captive of ontological instances whose role was already pinpointed by the philosophy of Greek Antiquity but which went underground with Modernity. The product of Heidegger’s work was a revolution in Western thought, which found echoes across all areas of society. Taking up Husserl’s call to go “back to the things themselves”, Heidegger’s impact has empowered the calls for more sustainable and resilient societies. Sustainability models, with their three pillars of environmental, economic and social sustainability, are directly dependent upon the role of technology and information science in shaping current patterns of production and consumption in contemporary societies. Industrial, academic and political discourses already voice such taken-for-granted assumptions. Nevertheless, it is crucial to clarify and highlight the links between economic evolution and progress, social change, and the catalysing role of technology, taken as an enabler of human action.

Area 6 - Enterprise Architecture

Full Papers
Paper Nr: 6
Title:

Data Governance Maturity Model for Micro Financial Organizations in Peru

Authors:

Stephanie Rivera and Nataly Loarte

Abstract: Micro finance organizations play an important role since they facilitate the integration of all social classes into sustained economic growth. Against this background, the exponential growth of data resulting from the transactions and operations carried out with these companies on a daily basis is evident. Appropriate management of this data is necessary because, otherwise, it will result in a competitive disadvantage due to the lack of valuable, quality information for decision-making and process improvement. Data Governance provides a different approach to data management, seen from the perspective of data as a business asset. In this regard, the organization needs the ability to assess the extent to which that management is correct or is generating the expected results. This paper proposes a data governance maturity model for micro finance organizations, which frames a series of formal requirements and criteria providing an objective diagnosis. The model was applied using the information of a Peruvian micro finance organization, and four of the seven domains listed in the model were evaluated. Finally, validation of the proposed model showed that it serves as a means of identifying the gap between data management and the objectives set.

Paper Nr: 37
Title:

Software Ecosystems Governance - A Systematic Literature Review and Research Agenda

Authors:

Carina Alves

Abstract: The field of software ecosystems is a growing discipline that has been investigated from managerial, social, and technological perspectives. The governance of software ecosystems requires a careful balance between the control and the autonomy given to players. Orchestrators that are able to balance their own interests while bringing joint benefits to other players are likely to create healthy ecosystems. Selecting appropriate governance mechanisms is a key problem in the management of proprietary and open source ecosystems. This article summarizes the current literature on software ecosystem governance by framing prevalent definitions, classifying governance mechanisms, and proposing a research agenda. We performed a systematic literature review of 63 primary studies. Several studies describe governance mechanisms, which were classified into three categories: value creation, coordination of players, and organizational openness and control. The number of studies indicates that the domain of software ecosystems and their governance is maturing. However, further studies are needed to address central challenges involved in the implementation of appropriate governance mechanisms that can nurture the health of ecosystems. We present a research agenda with several opportunities for researchers and practitioners to explore these issues.

Paper Nr: 62
Title:

A2BP: A Method for Ambidextrous Analysis of Business Process

Authors:

Higor Santos and Carina Alves

Abstract: In recent years, organizations have shown a growing concern with continually improving their processes and aligning them to satisfy clients’ expectations, needs and experience. Traditionally, the discipline of Business Process Management (BPM) focuses on ‘inside-out’ improvement of business processes and does not provide appropriate capabilities and techniques to explore ‘outside-in’ opportunities. Design Thinking and Organizational Ambidexterity are approaches that allow a balance between improving internal efficiency and analyzing the external environment in search of innovation. Inspired by these approaches, our study investigates how to exploit internal problems and explore external opportunities of business processes. The main contribution of this paper is a method called A2BP that systematizes the analysis phase of the BPM lifecycle by proposing exploitative and exploratory techniques. We evaluated A2BP by means of an expert opinion survey and an observational case study to assess its usefulness and ease of use. Overall, the evaluation of the method was positive, and constructive feedback was obtained to further refine the method in future studies.

Paper Nr: 70
Title:

Stack Wars: The Node Awakens

Authors:

Steven Kitzes and Adam Kaplan

Abstract: As the versatility and popularity of cloud technology increases, with storage and compute services reaching unprecedented scale, great scrutiny is now being turned to the performance characteristics of these technologies. Prior studies of cloud system performance analysis have focused on scale-up and scale-out paradigms and the topic of database performance. However, the server-side runtime environments supporting these paradigms have largely escaped the focus of formal benchmarking efforts. This paper documents a performance study intent on benchmarking the potential of the Node.js runtime environment, a rising star among server-side platforms. We herein describe the design, execution, and results of a number of benchmark tests constructed and executed to facilitate direct comparison between Node.js and its most widely-deployed competitor: the LAMP stack. We develop an understanding of the strengths and limitations of these server technologies under concurrent load representative of the computational behaviour of a heavily utilized contemporary web service. In particular, we investigate each server’s ability to handle heavy static file service, remote database interaction, and common compute-bound tasks. Analysis of our results indicates that Node.js outperforms the LAMP stack by a considerable margin in all single-application web service scenarios, and performs as well as LAMP under heterogeneous server workloads.

Paper Nr: 107
Title:

A Semi-automatic Approach to Identify Business Process Elements in Natural Language Texts

Authors:

Renato César Borges Ferreira

Abstract: In organizations, business process modeling is very important to report, understand and automate processes. However, the documentation about such processes that exists in organizations is mostly unstructured and difficult for analysts to understand. Extracting process models from textual descriptions may help minimize the effort required in process modeling. In this context, this paper proposes a semi-automatic approach to identify process elements in natural language texts, which may include process descriptions. Based on the study of natural language processing, we defined a set of mapping rules to identify process elements in texts. In addition, we developed a prototype that is able to semi-automatically identify process elements in texts. Our evaluation shows promising results: the analysis of 56 texts revealed 91.92% accuracy, and a case study showed that 93.33% of the participants agree with the mapping rules.

Paper Nr: 125
Title:

An Analysis of Strategic Goals and Non-Functional Requirements in Business Process Management

Authors:

Adson Carmo, Marcelo Fantinato, Lucinéia Thom and Edmir Prado

Abstract: Business processes’ Non-Functional Requirements (NFR) can foster strategic alignment in organizations. Our goal was to evaluate to what extent there are approaches that support the modeling of business processes’ NFR based on strategic goal-related information. To achieve this goal, we conducted a literature study based on systematic review concepts. As a result, we identified 19 works addressing strategic goals and business processes with NFRs. The most commonly used techniques are i* and Key Performance Indicators (KPIs) for modeling strategic goals, and Business Process Model and Notation (BPMN) for modeling business processes. According to our analysis, no approach fully addresses business processes’ NFR based on strategic goals, which was our primary question in conducting this study.

Paper Nr: 134
Title:

A Semiautomatic Process Model Verification Method based on Process Modeling Guidelines

Authors:

Valter Helmuth Goldberg Júnior, Lucineia Heloisa Thom and José Palazzo Moreira de Oliveira

Abstract: Designing comprehensible process models is a complex task. Process analysts must rely on the experience of expert systems managers to achieve process models with high comprehensibility, also known as pragmatic quality. In the literature, this experience is captured as process modeling guidelines that help modelers avoid common issues which hinder the comprehension of a process model. In this paper, we propose a method for the semi-automatic verification of business process models against process modeling guidelines. The method uses the BPMN Ontology and the ontology editor Protégé to assist the modeler with validation of the process model's syntax before verifying its pragmatic quality. The method was validated on a collection of 31 process models, and the results show that 23 process models in the collection contain at least one guideline violation.

Paper Nr: 148
Title:

A Synthesis of Enterprise Architecture Effectiveness Constructs

Authors:

Siyanda Nkundla-Mgudlwa and Jan C. Mentz

Abstract: Companies throughout the world use Enterprise Architecture (EA) because of benefits such as the alignment of business to Information Technology (IT), centralisation of decision making, and cost reductions due to the standardisation of business processes and systems. Even though EA offers organisational benefits, EA projects are reported as costly, time consuming and demanding of tremendous effort. Companies therefore seek ways to measure the effectiveness of EA implementation because of the money and time spent on EA projects. EA effectiveness refers to the degree to which EA helps to achieve the collective goals of the organisation, and its measurement depends on a list of constructs suitable for measuring the effectiveness of EA implementation. Currently, no comprehensive list of such constructs exists. This paper reports the results of a study that developed a comprehensive list of constructs suitable for measuring the effectiveness of EA implementation. The artefact developed in this research study is called the Enterprise Architecture Effectiveness Constructs (EAEC). The EAEC consists of six constructs: alignment; communication; governance; scope; top leadership commitment; and skilled teams, training and education. To achieve the purpose of this research study, a design science research (DSR) strategy was followed. The EAEC was evaluated in two rounds by EA experts from industry and academia.

Paper Nr: 245
Title:

Paths to IT Performance: A Configurational Analysis of IT Capabilities

Authors:

François Bergeron and Anne-Marie Croteau

Abstract: This study seeks to describe and explain how the environmental uncertainty, IT strategic orientation and IT capabilities of manufacturing SMEs contribute to their IT performance, that is, to their realization of benefits from the use of IT. A qualitative comparative analysis (QCA) unveils three IT capability configurations associated with high-IT-performance firms. Depending on the configuration, the core causal conditions involve an IT Defender strategic orientation and various combinations of IT managerial, functional, informational and technological capabilities. These results support the idea of a gestalt alignment threshold for the IT capabilities of high-IT-performance firms, that is, the idea that different IT capability configurations can be equally effective.

Paper Nr: 252
Title:

Outlining a Process to Manage the Complexity of Enterprise Systems Integration

Authors:

Tommi Kähkönen and Kari Smolander

Abstract: New service combinations constantly need to be created from an array of information systems and technologies developed at different times for different purposes, crossing organizational boundaries. Integration is a key concern for organizations, yet it is also an ambiguous and often misunderstood concept in the field of information systems. In this paper, we construct an integration process from an inductive study in a large manufacturing enterprise, by examining its long-term ERP development endeavour. The process consists of four sub-processes with dedicated actors and activities. Integration Governance is needed to align Integration Realization with the strategic goals of the organization. Integration Housekeeping is dedicated to standardization activities, to keeping the architectural description of the enterprise systems’ landscape updated, and to aiding Realization. By utilizing the assets produced by Governance and Housekeeping, Integration Evaluation decides whether it is feasible to set up an integration project or to abandon the initiative. The process helps managers to manage the complexity of enterprise systems integration and avoid its pitfalls.

Paper Nr: 322
Title:

Generic EA Analysis Framework for the Definition and Automatic Execution of Analyses

Authors:

Melanie Langermeier and Bernhard Bauer

Abstract: Analysis is an essential part of the Enterprise Architecture Management lifecycle. An in-depth consideration of the architecture reveals its strengths and weaknesses. This provides a sound foundation for the future evolution of the architecture as well as for decision-making regarding new projects. The current literature provides a large number of different analysis approaches, targeting different goals and utilizing different techniques. To provide a common interface to analysis activities, we studied the corresponding literature in previous research. Based on these results, we develop a language for the definition of EA analyses as well as an execution environment for their evaluation. To cope with the high variety of metamodels in the EA domain, the framework provides uniform and tool-independent access to analysis activities. Additionally, it can be used to provide an EA analysis library, from which the architect is able to select predefined analyses according to their specific requirements.

Short Papers
Paper Nr: 19
Title:

Using Process Indicators to Help the Verification of Goal Fulfillment

Authors:

Henrique P. de Sá Sousa, Vanessa T. Nunes and Claudia Cappelli

Abstract: Process modelling is often criticized as lacking proper alignment with business goals. Although the literature offers different proposals to address the issue, verifying this alignment remains an obstacle during process enactment. We make use of key process indicators (KPIs) in a process design method to annotate processes and activities with the proper information. The method derives this information from the business goals and uses it to calculate process indicators. We demonstrate through a real example, modelled with the ARIS business process modelling tool, how the method produces proper indicators, which can then be used during process enactment.

Paper Nr: 24
Title:

Business Cloudification - An Enterprise Architecture Perspective

Authors:

Ovidiu Noran and Peter Bernus

Abstract: Cloud computing is emerging as a promising enabler of some aspects of the ‘agile’ and ‘lean’ features that businesses need to display in today’s hyper-competitive and disruptive global economic ecosystem. However, it is increasingly obvious that there are essential prerequisites and caveats to cloudification that businesses need to be aware of in order to avoid pitfalls. This paper presents a novel, Enterprise Architecture-based approach to analysing the cloudification endeavour, adopting a holistic paradigm that takes into account the mutual influences of the entities and artefacts involved, in the context of their life cycles. As shown in the paper, this approach enables a richer insight into the ‘readiness’ of a business considering embarking on a cloudification endeavour, and therefore empowers management to evaluate the consequences of, and take cognisant decisions on, the cloudification extent, type, provider, etc., based on prompt information of appropriate quality and detail. The paper also presents a brief practical example of this approach and illustrates, from the Enterprise Architecture viewpoint, that a well-defined business architecture, policies and principles dictating solution selection and design, and a transition program are sine qua non preconditions for successful cloudification.

Paper Nr: 27
Title:

Service-oriented Business Model Framework - A Service-dominant Logic based Approach for Business Modeling in the Digital Era

Authors:

Andreas Pfeiffer

Abstract: The business model (BM) concept has been described as an intermediating tool for managing the transition from technology’s potential value into market outcomes. Unfortunately, current business modeling methodologies do not meet the specific needs of modeling value (co-)creation in digitally transforming ecosystems (DTE). Based on desktop research and empirical findings, this paper proposes a Service-oriented Business Modeling (SoBM) framework to advance the development of market solutions in these environments. Adopting a service-dominant logic (S-D logic) perspective, a service-centric, network-oriented, and transcending solution proposal is presented. It has been designed to identify and leverage digital technology’s potential value and to improve the conceptualization of value creation and capture in a digitally connected physical world.

Paper Nr: 55
Title:

Enterprise Architecture beyond the Enterprise - Extended Enterprise Architecture Revisited

Authors:

Torben Tambo

Abstract: As most enterprises rely on relations to other enterprises, it is relevant to consider enterprise architecture for inter-organisational relations, particularly those involving technology. This has been conceptualised as Extended Enterprise Architecture, and a systematic review of this discipline is the topic of this paper. The paper takes its point of departure in general theories of business-to-business relationships, along with inter-organisational information systems, interoperability and business ecosystems. These general theories are applied to Extended Enterprise Architecture to emphasise paradoxes, problems and potentials in extending EA across organisational boundaries. The purpose of this paper is to review the concept of Extended Enterprise Architecture (EEA) theoretically and empirically in order to identify the viability of Enterprise Architecture (EA) initiatives spanning organisational boundaries. A case is presented of an enterprise engaging in technology-based business process integration, which in turn is explicated as enterprise architecture initiatives with both more and less powerful partners. The paper underlines the necessity of EA initiatives spanning multiple enterprises, but also illuminates a range of problems related to (lack of) precision, imbalance, heterogeneity, transformation, temporality, and (operational) maturity. The concept of EEA remains vague; this paper therefore calls for a strengthened emphasis on redefining general architectural frameworks to embrace EEA, in order to handle typical modern forms of organisational design that rely on virtual and cross-company structures as cornerstones.

Paper Nr: 129
Title:

EVARES: A Quality-driven Refactoring Method for Business Process Models

Authors:

Wiem Khlif

Abstract: The business performance of an enterprise depends tightly on the quality of its business process model (BPM). This dependence has prompted several proposals to improve quality sub-characteristics (e.g., modifiability and reusability) of a BPM through transformation operations that change the internal structure of the model while preserving its external behaviour. Each transformation may improve certain metrics related to one quality sub-characteristic while degrading others. Consequently, one challenge of this transformation-based quality improvement approach is identifying the application order of the transformations that derives the “best” quality model. This paper proposes a local-optimization-based heuristic method to decide on the application order of the transformations to produce the best-quality BPM. The method is guided by the perspectives, by the impact of each transformation on the quality metrics pertinent to those perspectives, and by the quality sub-characteristics of interest to the designer. The method and an experimental evaluation are presented.

Paper Nr: 161
Title:

Risk Management Maturity Evaluation Artifact to Enhance Enterprise IT Quality

Authors:

Misael Sousa de Araujo and Edgard Costa Oliveira

Abstract: Information plays a fundamental role throughout an enterprise architecture, figuring as a strategic component in fulfilling its business processes. The application of IT risk management models is a key success factor in reaching organizational goals. However, merely adopting risk management practices is not enough to guarantee the expected benefits. Organizations face a growing need to know how efficient their business processes are, including their risk management processes, so that a degree of efficiency can be stated on a determined scale, existing deficiencies can be identified, and an improvement plan can be made to raise process quality and to compare performance with similar enterprises. Given the diversity of maturity models and their characteristics, this paper presents a comparative study of the main maturity models on the market, in which it was possible to select, with the help of the AHP (Analytic Hierarchy Process) decision technique, the process evaluation model of COBIT 4.1 to measure IT risk management maturity in modern enterprises.

Paper Nr: 192
Title:

FACIN: The Brazilian Government Enterprise Architecture Framework

Authors:

Vanessa T. Nunes

Abstract: The United Nations points out that raising the efficiency and quality of public services is not just a matter of cutting-edge technologies, but also of adopting practices to connect and interoperate governments, which requires a holistic approach. An Enterprise Architecture (EA) initiative is a consistent feature of strategic planning that assists organizations in understanding how processes and services are automated, as well as helping to reduce organizational complexity, improve communication, and drive organizational change. We describe FACIN, the Brazilian Government EA Framework, which supports interoperability and digital governance among governmental organizations. FACIN aims to foster intra- and inter-organizational alignment, and also to provide a basis for the development of methods and best practices to improve the efficiency of public administration and services. In this paper we present the first results and perceptions.

Paper Nr: 298
Title:

Business Model Pattern Execution - A System Dynamics Application

Authors:

María Camila Romero

Abstract: The dynamic aspects of a business model are key to understanding the behavior of the business and the consequences of any change. In spite of the multiple approaches to describing business models, most of them emphasize static elements and leave the dynamic ones to intuition, which in turn diminishes the overall understanding of the model. Among the approaches that explicitly present dynamic elements, we find business model patterns, which provide a visual representation of the dynamics behind the business by portraying information, value and money flows. Although this representation delivers adequate insights into the dynamics, it is possible to enhance them by executing the patterns. To visualize the dynamic behavior and the effects of changes and time on any component, we present an approach based on system dynamics.

Paper Nr: 304
Title:

Towards a National Enterprise Architecture Framework in Iran

Authors:

Fereidoon Shams Aliee, Reza Bagheriasl, Amir Mahjoorian and Maziar Mobasheri

Abstract: National Enterprise Architecture (EA) is regarded as a catalyst for achieving e-government goals and many countries have given priority to it in developing their e-government plans. Designing a national EA framework which fits the government’s specific needs facilitates EA planning and implementation for public agencies and boosts the chance of EA success. In this paper, we introduce Iran’s national EA framework (INEAF). The INEAF is designed in order to improve interoperability and deal with EA challenges in Iranian agencies.

Paper Nr: 312
Title:

Highly Scalable Microservice-based Enterprise Architecture for Smart Ecosystems in Hybrid Cloud Environments

Authors:

Daniel Müssig and Robert Stricker

Abstract: Conventional scaling strategies based on general technical metrics such as RAM or CPU measures are not aligned with the business and hence often lack precision and flexibility. First, the paper argues that custom metrics for scaling, load balancing and load prediction result in better business alignment of the scaling behavior as well as cost reduction. Furthermore, due to the scaling requirements of structural (non-business) services, existing authorization patterns such as API gateways result in inefficient scaling behavior. By introducing a new pattern for authorization processes, scalability can be optimized. In sum, the changes improve not only scalability but also the availability, robustness and security characteristics of the infrastructure. Beyond this, resource optimization, and hence cost reduction, can be achieved.

Posters
Paper Nr: 3
Title:

Framework for Privacy in Photos and Videos When using Social Media

Authors:

Srinivas Madhisetty and Mary-Anne Williams

Abstract: Privacy is a social construct. Having said that, how can it be contextualised and studied scientifically? This research contributes by investigating how to manage privacy better in the context of sharing and storing photos and videos using social media. Social media such as Facebook, Twitter, WhatsApp and many other applications are becoming popular. The instant sharing of tacit information via photos and videos makes the problem of privacy even more critical. The main problem is that nobody can define the actual meaning of privacy. Though there are definitions of privacy and Acts to protect it, there is no clear consensus as to what it actually means. I asked myself: how do I manage something when I do not know exactly what it means? I then decided to conduct this research by asking questions about privacy in particular categories of photos, so that I could arrive at a general consensus. The data has been processed using the principles of Grounded Theory (GT) to develop a framework which assists in the effective management of privacy in photos and videos.

Paper Nr: 7
Title:

Rules for Validation of Models of Enterprise Architecture - Rules of Checking and Correction of Temporal Inconsistencies among Elements of the Enterprise Architecture

Authors:

Amiraldes Xavier

Abstract: The organizational structure of the elements in an enterprise architecture model is key for decision-making and business transformation. Over time, the relationships among the elements of the EA (Enterprise Architecture) may become inconsistent in the EA model. To address this problem, in this research we specify a set of temporal rules divided into two categories: rules for verification and rules for the correction of inconsistencies. The specification of these rules is based on the states of the elements of the EA and on the concepts presented by the ArchiMate metamodel. The rules are translated into logical expressions in order to make them easier to implement. This research was developed on the basis of a concrete study of an enterprise architecture management tool, but the proposed solution can be adapted to any EA management tool.

Paper Nr: 48
Title:

Constraint Analysis based on Energetic Reasoning Applied to the Problem of Real Time Scheduling of Workflow Management Systems

Authors:

Flávio Félix Medeiros and Stéphane Julia

Abstract: The objective of this paper is to propose a constraint analysis applied to the problem of real-time scheduling in workflow management systems. The adopted model is a p-time Workflow net with a hybrid resource allocation mechanism. The approach considers time constraint propagation techniques for the different types of routings that exist in workflow processes. Different types of resources, discrete and continuous, are then incorporated into the model, and an approach based on energetic reasoning is applied. Energetic reasoning can identify unacceptable schedules caused by the energetic inability of the resources involved to carry out the related activities. The temporal constraints are then updated in order to eliminate dates inconsistent with the set of scheduling solutions. Considering the set of modified constraints, a specialized inference mechanism called a token player is then applied, whose purpose is to obtain, in real time, an admissible scenario corresponding to a specific sequence of activities that respects the time constraints.

Paper Nr: 79
Title:

Client-oriented Architecture for e-Business

Authors:

Jose Neto and Celso Hirata

Abstract: In general, e-Business solutions focus on the service provider; they do not consider the interests of companies or large clients. Current web service business models allow consumers neither to request unavailable services nor to have their service needs met by a complex arrangement of providers. In this paper we propose a novel approach for making service compositions based on customer demand to meet business needs in an agile way. The business needs relate to the flexibility of the client in expressing its needs; the agility relates to the capability of the supplier to implement services in a timely manner. To achieve this goal, we use business forms and the e-Contract lifecycle methodology to discipline and facilitate the responsibilities of provider and client.

Paper Nr: 126
Title:

The Quest for Underpinning Theory of Enterprise Architecture - General Systems Theory

Authors:

Nestori Syynimaa

Abstract: Enterprise architecture originates from the 1980s. It emerged among ICT practitioners to solve complex problems related to information systems. Currently EA is also utilised to solve business problems, although the focus is still on ICT and its alignment with the business. EA can be defined as a description of the current and future states of the enterprise, and of the change between these states to meet stakeholders’ goals. Despite its popularity and 30 years of age, a literature review conducted on top information and management science journals revealed that EA still lacks a sound theoretical foundation. In this conceptual paper, we propose General Systems Theory (GST) as the underpinning theory of EA. GST allows us to see enterprises as systems of systems consisting of, for instance, social organisations, humans, information systems and computers. This explains why EA can be used to describe the enterprise and its components, and how to control them to execute managed change. Implications for science and practice, and some directions for future research, are also provided.

Paper Nr: 141
Title:

ICT Governance, Risks and Compliance - A Systematic Quasi-review

Authors:

Claudio Junior Nascimento da Silva

Abstract: The present study conducts a quasi-systematic review in a structured way to identify, evaluate and summarize the main evidence on Governance, Risk Management and Compliance in the Information and Communication Technology (ICT) area of companies. The objective is to analyze the existing methods and/or techniques, characterizing their application in an ICT environment so as to assist the reader through a secondary study. A research question was adopted to guide the quasi-systematic review, which started from an initial set of 47 articles, of which 18 were selected for the construction of this work through a selection covering ICT Governance, Risk Management and Compliance.

Paper Nr: 316
Title:

Enterprise Architecture - An Approach to Promote Traceability and Synchronization of Computational Models

Authors:

José Rogério Poggio Moreira and Rita Suzana Pitangueira Maciel

Abstract: In the context of Enterprise Architecture (EA) modeling, the lack of alignment between computational models from the strategic level to the operational level of Information Technology (IT), with a focus on information systems, constitutes an organizational problem. The root cause of this problem is the low traceability and lack of synchronization between the computational models present in the EA. Among the negative impacts of this problem are, for example, the obsolescence of models and the difficulty of carrying out impact analyses and making decisions in scenarios of organizational change. This paper proposes an approach to enable traceability and synchronization between the computational models of an enterprise architecture. The approach is expected to provide greater alignment, understanding and adaptation of the organizational environment, from the strategic level to the operational level of IT.

Workshop

AEM 2017

Full Papers
Paper Nr: 1
Title:

Demonstrating Approach Design Principles during the Development of a DEMO-based Enterprise Engineering Approach

Authors:

Thomas van der Meulen

Abstract: Enterprise engineering (EE) aims to address several phenomena in the evolution of an enterprise. One prominent phenomenon is the inability of the enterprise as a complex socio-technical system to adapt to rapidly-changing environments. In response to this phenomenon, many enterprise design approaches (with their own methodologies, frameworks, and modelling languages) emerged, but with little empirical evidence about their effectiveness. Furthermore, research indicates that multiple enterprise design approaches are used concurrently in industry, with each approach focusing on a sub-set of stakeholder concerns. The proliferating design approaches do not necessarily explicate their conditional use in terms of contextual prerequisites and demarcated design scope; and this also impairs their evaluation. Previous work suggested eleven design principles that would guide approach designers when they design or enhance an enterprise design approach. The design principles ensure that researchers contribute to the systematic growth of the EE knowledge base. This article provides a demonstration of the eleven principles during the development of a DEMO-based enterprise engineering approach, as well as a discussion to reflect on the usefulness of the principles.

Paper Nr: 2
Title:

Service-oriented Architecture: Describing Benefits from an Organisational and Enterprise Architecture Perspective

Authors:

Hatitye Chindove

Abstract: Software architecture models describe the technical structure, constraints, and characteristics of software components and the interfaces between them. Service-Oriented Architecture (SOA) is a recent software architecture style with many benefits if used in the right context. Business agility, customer satisfaction, faster time to market, ease of partnering and lower business costs are some promised benefits. Yet SOA has not always benefitted organisations. One reason given is a misunderstanding of the relationship between SOA and enterprise architecture (EA). Therefore, this study in a large retail organisation in South Africa describes SOA benefits and classifies them into the various EA domains. SOA benefits are also classified into six broad categories, namely strategic, organisational, operational, managerial, maintenance and governance. The study comprises three cases from one organisation that deployed different architectures. SOA benefits are contrasted with benefits from other approaches. Organisational benefits not described before include greater collaboration amongst SOA participants, enabling better learning opportunities. The results should assist IT management in preparing SOA business cases and in managing SOA deployments to ensure benefits are achieved.

Paper Nr: 3
Title:

Towards a Conceptual Model for an e-Government Interoperability Framework for South Africa

Authors:

Paula Kotzé and Ronell Alberts

Abstract: In September 2016, the South African Government published a White Paper on the National Integrated ICT Policy highlighting some principles for e-Government and the development of a detailed integrated national e-Government strategy and roadmap. This paper focuses on one of the elements of such a strategy, namely the delivery infrastructure principles identified, and addresses the identified need for centralised coordination to ensure interoperability. The paper proposes a baseline conceptual model for an e-Government interoperability framework (e-GIF) for South Africa. The development of the model considered best practices and lessons learnt from previous national and international attempts to design and develop national and regional e-GIFs, with cognisance of the South African legislation and technical, social and political environments. The conceptual model is an enterprise model on an abstraction level suitable for strategic planning.

Paper Nr: 4
Title:

A Growth State Transition Model as Driver for Business Process Management in Small Medium Enterprises

Authors:

Dina Jacobs

Abstract: A key constraint for growing small and medium enterprises (SMEs) in South Africa is the business skills required to grow the enterprises through the stages of transformation. Business process management (BPM) is one of the skills that could add value during transformation. Understanding the stages of transformation during SME growth would assist in positioning BPM as an instrument of value for SMEs. These stages of SME growth are typically defined as part of SME growth stage models. However, the criticism levelled against SME growth stage models is of concern. In this article, we propose the 5S SME Growth State Transition Model in order to counteract some of the criticisms. The value contribution of the model lies in defining typical states associated with SME growth that can be used as input in research to position BPM as a management approach during SME growth.

Paper Nr: 6
Title:

DSML4PTM - A Domain-Specific Modelling Language for Patient Transferal Management

Authors:

Emanuele Laurenzi, Knut Hinkelmann, Ulrich Reimer and Alta van der Merwe

Abstract: This paper presents a domain-specific modelling language for patient transferal management (DSML4PTM). To foster reusability within the modelling community, existing modelling languages were taken into account as far as possible and then extended as needed by the application domain. The language was developed iteratively, following the design science research methodology. For requirements elicitation, domain expertise and healthcare standards were taken into account. The new modelling language was evaluated first with respect to the elicited requirements and then through the creation of two models reflecting a reference process and an application scenario. Next, an evaluation of the perceived usefulness and cognitive effort of the language was performed using a focus group with modelling and domain experts.

Paper Nr: 7
Title:

Managing and Controlling Decentralized Corporate Energy Systems - Transferring Best-practice Methods to the Energy Domain

Authors:

Christine Koppenhoefer

Abstract: Managing decentralized corporate energy systems is a challenging task for enterprises. However, the integration of energy objectives into business strategy creates difficulties resulting in inefficient decisions. To improve this, practice-proven methods such as the Balanced Scorecard and Enterprise Architecture Management are transferred to the energy domain. The methods are evaluated based on a case study. Managing multi-dimensionality and high complexity are the main drivers for an effective and efficient energy management system. Both methods show a positive impact on managing decentralized corporate energy systems and are adaptable to the energy domain.

Short Papers
Paper Nr: 5
Title:

Towards an Understanding of the Connected Mobility Ecosystem from a German Perspective

Authors:

Anne Faber

Abstract: This paper presents a model of the connected mobility ecosystem, including a description of the associated industry. Although connected mobility is a topic of global relevance and interest, we analyzed the ecosystem from a German perspective due to Germany’s strong history of automotive OEMs and suppliers. To gain a better understanding of the mobility ecosystem, we introduce a modified ego-network visualization focusing on mobility services. This visualization enables a user-centred design analysis of the ecosystem and allows stakeholders to identify both companies that contribute strongly to providing these services and rather passive contributors. Additionally, it helps ecosystem stakeholders to understand the complex collaborations among companies in providing mobility services. We plan to continue our work focusing on mobility scenarios that address the needs and demands of mobility consumers.