ICEIS 2021 Abstracts


Area 1 - Databases and Information Systems Integration

Full Papers
Paper Nr: 27
Title:

Proposal of an Implementation Process for the Brazilian General Data Protection Law (LGPD)

Authors:

Edna D. Canedo, Anderson J. Cerqueira, Rogério M. Gravina, Vanessa C. Ribeiro, Renato Camões, Vinicius D. Reis, Fábio L. Mendonça and Rafael D. Sousa Jr.

Abstract: The increasing number of online users has led to a corresponding increase in the number and variety of personal data collection devices. As a result, it became necessary to create and regulate new personal data policies that define the rights and duties of public and private organizations and of users. As in other countries, the Brazilian General Data Protection Law (LGPD) was created to define nationwide rules for the privacy of users’ data. In this paper, we present a proposal for an LGPD implementation process, modeled using Business Process Model and Notation (BPMN). This proposal is intended to allow Brazilian Federal Public Administration (FPA) Agencies to perform the steps to implement the LGPD in an easier and more targeted way, resulting in increased privacy of personal data. The proposal also defines new roles and responsibilities within FPA Agencies that enable them to provide clarifications in response to complaints about personal data, receive communications from the National Data Protection Authority (ANPD) and adopt measures, and guide employees regarding data protection rules, regulations and laws.

Paper Nr: 48
Title:

Taxi Service Simulation: A Case Study in the City of Santa Maria with Regard to Demand and Drivers’ Income

Authors:

Andre Brizzi and Marcia Pasin

Abstract: The taxi is a well-established service in many cities around the world. Nowadays, service requests are mainly made through mobile applications, where the user selects the desired options, including the payment method. An information system, aware of the location of the taxis, assigns the closest taxi to the customer request. In general, a taxi allows the user to enjoy a mobility service without bearing the costs of vehicle maintenance directly. The vehicle owner, who can be a company or a self-employed person (frequently the taxi driver), is the one in charge of vehicle maintenance, fuel, etc. Recently, however, the taxi service has lost much of its appeal to competing car sharing services. It is therefore necessary to evaluate more carefully the implementation and maintenance of taxi services in cities with regard to drivers’ income. This work contributes in this direction. Here, the taxi service is considered with a standardized vehicle fleet but different vehicle types (electric, ethanol, gasoline and CNG). Given a demand and costs, a simulation is proposed to detail and evaluate the appropriate balance between drivers’ income and the demand scheme needed to keep the service viable for drivers. The simulation was performed in a real scenario, the city of Santa Maria in Southern Brazil. Input values for the simulation scenario (fuel price, demand, etc.) were chosen based on the literature, city hall documentation and Internet news, to make the simulation as realistic as possible. The simulation results show that, for a feasible taxi service, the city hall must define a maximum number of taxi licenses. The vehicle type has a large impact on the taxi driver’s profit. Electric vehicles have a lower cost per km driven, but still have a high acquisition cost. Finally, as the daily traveled distance increases, the difference between electric vehicles and the others decreases, making it possible for electric vehicles to become more advantageous.

Paper Nr: 61
Title:

HOUDAL: A Data Lake Implemented for Public Housing

Authors:

Etienne Scholly, Cécile Favre, Eric Ferey and Sabine Loudcher

Abstract: Like all areas of economic activity, public housing is impacted by the rise of big data. While Business Intelligence and Data Science analyses are more or less mastered by social landlords, combining them inside a shared environment is still a challenge. Moreover, processing big data, such as geographical open data that sometimes exceed the capacity of traditional tools, raises a second issue. To face these problems, we propose to use a data lake, a system in which data of any type can be stored and from which various analyses can be performed. In this paper, we present a real use case on public housing that fueled our motivation to introduce a data lake. We also propose a data lake framework that is versatile enough to meet the challenges induced by the use case. Finally, we present HOUDAL, an implementation of a data lake based on our framework, which is operational and used by a social landlord.

Paper Nr: 82
Title:

Identifying Suspects on Social Networks: An Approach based on Non-structured and Non-labeled Data

Authors:

Érick S. Florentino, Ronaldo R. Goldschmidt and Maria C. Cavalcanti

Abstract: The identification of suspects of committing virtual crimes (e.g., pedophilia, terrorism, bullying, among others) has become a highly relevant task in social network analysis. Most analysis methods use the supervised machine learning (SML) approach, which requires a previously labeled data set, i.e., one in which the network users who are and who are not suspects have already been identified. From such labeled network data, an SML algorithm generates a model capable of identifying new suspects. In practice, however, when analyzing a social network one does not know in advance who the suspects are (i.e., labeled data are rare and difficult to obtain in this context). Furthermore, social networks are highly dynamic and vary significantly, which demands that the model be frequently updated with recent data. Thus, this work presents a method for identifying suspects based on messages and a controlled vocabulary composed of suspicious terms and their categories, according to a given domain. Unlike SML algorithms, the proposed method does not require labeled data. Instead, it analyzes the messages exchanged on a given social network and scores people according to the occurrence of the vocabulary terms. It is worth highlighting the endurance of the proposed method, since a controlled vocabulary is quite stable and evolves slowly. Moreover, the method was implemented for Portuguese texts and applied to the “PAN-2012-BR” data set, showing promising results in the pedophilia domain.

Paper Nr: 111
Title:

FakeWhatsApp.BR: NLP and Machine Learning Techniques for Misinformation Detection in Brazilian Portuguese WhatsApp Messages

Authors:

Lucas Cabral, José M. Monteiro, José W. Franco da Silva, César L. Mattos and Pedro C. Mourão

Abstract: In the past few years, the large-scale dissemination of misinformation through social media has become a critical issue, harming the trustworthiness of legitimate information, social stability, democracy and public health. Thus, developing automated misinformation detection methods has become a field of high interest both in academia and in industry. In many developing countries, such as Brazil, India, and Mexico, one of the primary sources of misinformation is the messaging application WhatsApp. Despite this scenario, due to the private messaging nature of WhatsApp, there are still few misinformation detection methods developed specifically for this platform. In this work we present FakeWhatsApp.BR, a dataset of WhatsApp messages in Brazilian Portuguese, collected from Brazilian public groups and manually labeled. In addition, we evaluated a series of misinformation classifiers combining Natural Language Processing feature extraction techniques with a set of well-known machine learning algorithms, totaling 108 different scenarios. Our best result achieved an F1 score of 0.73, and the error analysis indicates that errors occur mainly due to the predominance of short texts that accompany media files. When texts with fewer than 50 words are filtered out, the F1 score rises to 0.87.

Paper Nr: 123
Title:

A Novel Method for Object Detection using Deep Learning and CAD Models

Authors:

Igor B. Sampaio, Luigy Machaca, José Viterbo and Joris Guérin

Abstract: Object Detection (OD) is an important computer vision problem for industry, which can be used for quality control on production lines, among other applications. Recently, Deep Learning (DL) methods have enabled practitioners to train OD models that perform well on complex real-world images. However, the adoption of these models in industry is still limited by the difficulty and significant cost of collecting high-quality training datasets. On the other hand, when applying OD in the context of production lines, CAD models of the objects to be detected are often available. In this paper, we introduce a fully automated method that takes a CAD model of an object and returns a fully trained OD model for detecting this object. To do so, we created a Blender script that generates realistic labeled datasets of images containing the object, which are then used to train the OD model. The method is validated experimentally on two practical examples, showing that this approach can generate OD models that perform well on real images while being trained only on synthetic images. The proposed method has the potential to facilitate the adoption of object detection models in industry, as it is easy to adapt to new objects and highly flexible. Hence, it can result in significant cost reductions, productivity gains and improved product quality.

Paper Nr: 152
Title:

A Domain Specific Language to Provide Middleware for Interoperability among SaaS and DaaS/DBaaS through a Metamodel Approach

Authors:

Babacar Mane, Ana P. Magalhaes, Gustavo Quinteiro, Rita P. Maciel and Daniela B. Claro

Abstract: Cloud Computing (CC) is a paradigm that manages a pool of virtualized resources at the infrastructure, platform, and software levels to deliver them as services over the Internet. Cloud platforms are heterogeneous, and therefore cloud users may face interoperability and integration issues regarding the consumption, provisioning, management, and supervision of resources across distinct clouds. Due to the lack of standards in such a heterogeneous environment, an organization may face a lock-in situation. A middleware can minimize the effort needed to overcome lock-in problems. The MIDAS middleware ensures semantic interoperability between Software as a Service (SaaS) and Data as a Service (DaaS), and at the same time provides data integration between DaaS. Currently, MIDAS runtime implementations rely on the Cloud Foundry, Amazon Web Services, OpenShift, and Heroku providers. To avoid ambiguity in MIDAS development and deployment, an unambiguous definition of MIDAS architectural concepts must be provided. Thus, our work presents a Domain-Specific Modeling Language (DSML) comprising a metamodel of the MIDAS semantic architecture and a Unified Modeling Language (UML) profile. To evaluate the expressiveness of the DSML, we instantiated several middleware models, and the findings demonstrate that our modeling language provides an adequate set of concepts to specify the middleware.

Paper Nr: 195
Title:

A Model for Implementing Enterprise Resource Planning Systems in Small and Medium-sized Enterprises

Authors:

Daniela Tapia, Paola Vintimilla, Ximena Alvarez, Juan Llivisaca, Mario Peña, Rodrigo Guamán, Lorena Siguenza-Guzman and Diana Jadan-Aviles

Abstract: Small and medium-sized enterprises (SMEs) are considered dynamic agents within the business environment. Currently, SMEs have great potential for strong growth and profit. However, their growth is restricted by the lack of systems that would allow them to integrate their data and activities. One possible solution is the implementation of Enterprise Resource Planning (ERP) systems to increase a company’s efficiency, effectiveness, and productivity. However, implementation processes require investing resources and bring certain problems, e.g., the difficulty of fully adapting to the organization’s accounting and management procedures, and end-users’ lack of experience in handling ERP systems. This study focuses on constructing a model for successfully implementing ERP systems in SMEs. The model uses a group of critical success factors (CSF) to analyze empirical evidence in organizations. For its development, the interpretive structural modeling methodology was used, and it was validated in a focus group of experts in implementing and using ERP systems. The results show that the model is adequate for a successful implementation in SMEs engaged in sales, production, or service activities.

Paper Nr: 209
Title:

Managing Evolution of Heterogeneous Data Sources of a Data Warehouse

Authors:

Darja Solodovnikova, Laila Niedrite and Lauma Svilpe

Abstract: The evolution of heterogeneous integrated data sources has become a topical issue, as data nowadays is very diverse and dynamic. For this reason, novel methods are necessary to collect, store and analyze data from various data sources as efficiently as possible, while also handling changes in data structures that occur as a result of evolution. In this paper, we propose a solution to the problems caused by the evolution of integrated data sources and of the information requirements of a data analysis system. Our solution incorporates an architecture that makes it possible to perform OLAP operations and other kinds of analysis on integrated big data, and a mechanism for detecting changes that have occurred in heterogeneous data sources and propagating them, automatically or semi-automatically, to a data warehouse.

Paper Nr: 234
Title:

Tracking and Tracing of Global Supply Chain Network: Case Study from a Finnish Company

Authors:

Ahm Shamsuzzoha, Michael Ehrs, Richard Addo-Tengkorang and Petri Helo

Abstract: Tracking and tracing is an essential need in global supply chain and logistics networks. Existing technologies are mostly suitable for single-channel supply chains and not for multi-channel supply networks. The objective of this research study is therefore to outline the technological know-how and possibilities related to tracking and tracing items within a distributed supply chain and logistics network. This research focuses on implementing a novel tracking system applicable to the total supply network, covering both inbound and outbound shipments. The study is validated within the scope of how available tracking technologies can help a Finnish case company manage its global supply and delivery network. Both hybrid and cloud-enabled online tracking systems are proposed in this research. The application of the proposed tracking technologies provides the case company with real-time visibility of its current logistics assets.

Paper Nr: 236
Title:

Automatic Construction of Benchmarks for RDF Keyword Search Systems Evaluation

Authors:

Angelo B. Neves, Luiz P. Leme, Yenier T. Izquierdo and Marco A. Casanova

Abstract: Keyword search systems provide users with a friendly alternative to access Resource Description Framework (RDF) datasets. The evaluation of such systems requires adequate benchmarks, consisting of RDF datasets and keyword queries, with their correct answers. However, the sets of correct answers such benchmarks provide for each query are often incomplete, mostly because they are manually built with experts’ help. The central contribution of this paper is an offline method that helps build RDF keyword search benchmarks automatically, leading to more complete sets of correct answers, called solution generators. The paper focuses on computing sets of generators and describes heuristics that circumvent the combinatorial nature of the problem. The paper then describes five benchmarks, constructed with the proposed method and based on three real datasets, DBpedia, IMDb, and Mondial, and two synthetic datasets, LUBM and BSBM. Finally, the paper compares the constructed benchmarks with keyword search benchmarks published in the literature.

Paper Nr: 242
Title:

Towards a Methodological Approach for the Definition of a Blockchain Network for Industry 4.0

Authors:

Charles B. Garrocho, Karine N. Oliveira, Carlos C. Cavalcanti and Ricardo R. Oliveira

Abstract: The Industrial Internet of Things is expected to attract significant investments for Industry 4.0. In this new environment, blockchain has immediate potential in industrial applications, providing immutable, traceable and auditable communication. Blockchain has gained prominence in academia, being developed and evaluated in several application areas. However, no study has presented methodologies for defining blockchain networks in the industrial environment. To fill this gap, we propose a methodology that outlines the paths to follow and the important aspects to be analyzed for the definition and deployment of a blockchain network architecture. This methodology can help in the appropriate choice of platforms and parameters for blockchain networks, resulting in reduced costs for the factory and reliability in meeting deadlines for industrial processes.

Paper Nr: 248
Title:

Entity Resolution in Large Patent Databases: An Optimization Approach

Authors:

Emiel Caron and Ekaterini Ioannou

Abstract: Entity resolution in databases focuses on detecting and merging entities that refer to the same real-world object. Collective resolution is among the most prominent mechanisms suggested to address this challenge since the resolution decisions are not made independently, but are based on the available relationships within the data. In this paper, we introduce a novel resolution approach that combines the essence of collective resolution with rules and transformations among entity attributes and values. We illustrate how the approach’s parameters are optimized based on a global optimization algorithm, i.e., simulated annealing, and explain how this optimization is performed using a small training set. The quality of the approach is verified through an extensive experimental evaluation with 40M real-world scientific entities from the Patstat database.
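The abstract states that the approach's parameters are optimized with simulated annealing, but the actual cost function and parameter space are not given here. As orientation only, the generic annealing loop can be sketched in plain Python, with a hypothetical one-parameter quadratic cost standing in for the real resolution-quality objective:

```python
import math
import random

def simulated_annealing(cost, start, neighbour, t0=1.0, cooling=0.95,
                        steps=500, rng=random.Random(0)):
    """Generic simulated-annealing loop: always accept improving moves,
    accept worsening moves with probability exp(-delta/T), and cool T."""
    x, fx = start, cost(start)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbour(x, rng)
        fy = cost(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy              # move to the candidate
            if fx < fbest:
                best, fbest = x, fx    # remember the best point seen
        t *= cooling                   # geometric cooling schedule
    return best, fbest

# Toy use: tune a single threshold parameter against a quadratic cost
# (hypothetical; the paper optimizes resolution parameters on a training set).
best, fbest = simulated_annealing(
    cost=lambda p: (p - 0.42) ** 2,
    start=0.0,
    neighbour=lambda p, r: p + r.uniform(-0.1, 0.1),
)
print(round(best, 2))
```

The acceptance rule is what distinguishes this from plain hill climbing: early on (high T) the search can escape local minima, while late on it behaves greedily.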

Short Papers
Paper Nr: 19
Title:

Machine Learning within a Graph Database: A Case Study on Link Prediction for Scholarly Data

Authors:

Sepideh S. Sobhgol, Gabriel C. Durand, Lutz Rauchhaupt and Gunter Saake

Abstract: When combining data management and ML tools, a common problem is that ML frameworks might require moving the data outside of its traditional storage (i.e., databases) for model building. In such scenarios, it could be more effective to adopt in-database statistical functionalities (Cohen et al., 2009). Such functionalities have received attention for relational databases, but for graph-based database systems there are insufficient studies to guide users, either in clarifying the role of the database or the pain points that require attention. In this paper we make an early feasibility assessment of such processing for the graph domain, prototyping on a state-of-the-art graph database (Neo4j) an in-database ML-driven case study on link prediction. We identify a general series of steps and a common-sense approach for database support. We find limited differences between most steps of the processing setups, suggesting a need for further evaluation. We identify bulk feature calculation as the most time-consuming task, at both the model building and inference stages, and hence define it as a focus area for improving how graph databases support ML workloads.
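The abstract singles out bulk feature calculation for link prediction as the bottleneck. As a rough, database-free illustration of what such a bulk computation involves (not the paper's Neo4j implementation), a classic common-neighbours score over all non-adjacent node pairs can be sketched in plain Python over a tiny hypothetical edge list:

```python
def common_neighbour_scores(edges):
    """Bulk link-prediction feature: for every pair of nodes that is NOT
    already connected, count how many neighbours the two nodes share
    (the classic common-neighbours score)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    nodes = sorted(adj)
    scores = {}
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if v not in adj[u]:                 # only candidate (missing) links
                scores[(u, v)] = len(adj[u] & adj[v])
    return scores

# Hypothetical toy graph: a-b-c form a triangle, d hangs off c.
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]
print(common_neighbour_scores(edges))
```

The all-pairs loop is quadratic in the number of nodes, which hints at why this step dominates runtime at both model building and inference time.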

Paper Nr: 20
Title:

Open Data Integration in 3D CityGML-based Models Generation

Authors:

Mcdonnell A. Maieron and José P. Moreira de Oliveira

Abstract: Facing the increasing complexity of large urban centers caused by population growth and the dynamic nature of cities, their managers seek to optimize services and infrastructure in terms of scalability, environment, and security to adapt to demand, making their cities smarter. Therefore, the administrators of these modern centers should apply smart governance techniques to manage the physical and data infrastructure and seek alignment with the global open data initiative. As a point of intersection between physical and data infrastructure, 3D models of cities have been playing an important role in people’s daily lives, being a fundamental element for several applications. In this context, CityGML, a semantic model for 3D data representation adopted by several cities, appears as a possible modeling solution. This paper presents an approach to integrating open data into the semi-automatic generation of 3D models based on CityGML, “enriching” the semantic information of the instances by associating them with the OpenStreetMap database. A case study was performed using data provided by the Municipality of Porto Alegre, Brazil. The model generated in CityGML passes semantic, geometric, and schema-level validations, demonstrating the feasibility of the proposed approach.

Paper Nr: 25
Title:

An Effective Intrusion Detection Model based on Random Forest Algorithm with I-SMOTE

Authors:

Weijinxia, Longchun, Wanwei, Zhaojing, Duguanyao and Yangfan

Abstract: With the wide application of networks in our daily lives, network security is becoming increasingly prominent. Intrusion detection systems have been widely used to detect various types of malicious network traffic that cannot be detected by a conventional firewall. Therefore, various machine-learning techniques have been proposed to improve the performance of intrusion detection systems. However, the balance between different data classes is critical and affects detection performance. In order to reduce the impact of class imbalance in the intrusion dataset, this paper proposes a scheme that applies an improved synthetic minority oversampling technique (I-SMOTE) to balance the dataset, employs correlation analysis and random forest to reduce features, and uses the random forest algorithm to train the classifier for detection. The experimental results on the NSL-KDD dataset show that the scheme achieves better and more robust performance in terms of accuracy, detection rate, false alarms and training speed.
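The I-SMOTE variant proposed in the paper is not detailed in this abstract. For orientation, the baseline SMOTE idea it builds on — synthesizing minority-class samples by interpolating between a sample and one of its nearest minority neighbours — can be sketched in plain Python over hypothetical 2-D data:

```python
import random

def smote(minority, n_new, k=3, rng=random.Random(42)):
    """Baseline SMOTE sketch: create n_new synthetic minority-class points,
    each placed on the line segment between a random minority sample and
    one of its k nearest minority neighbours."""
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours by squared Euclidean distance, excluding x itself
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # random position along the segment [x, nb]
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

# Hypothetical minority class of four 2-D points, oversampled with 6 new ones.
minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (1.1, 1.3)]
new_points = smote(minority, n_new=6)
print(len(new_points))  # 6
```

Because every synthetic point lies between two existing minority samples, the oversampled class stays inside its original region instead of duplicating points exactly, which is the property random oversampling lacks.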

Paper Nr: 28
Title:

Activity-based Traffic Indicator System for Monitoring the COVID-19 Pandemic

Authors:

Justin Junsay, Aaron J. Lebumfacil, Ivan G. Tarun and William E. Yu

Abstract: This study describes an activity-based traffic indicator system to provide information for COVID-19 pandemic management. The system does this by utilizing a social probability model based on the birthday paradox to determine the exposure risk, the probability of meeting someone infected (PoMSI). COVID-19 data, particularly the 7-day moving average of the daily growth rate of cases (7-DMA of DGR) and the cumulative confirmed cases of the following week, covering the period from April to September 2020, were then used to test PoMSI using Pearson correlation, to verify whether it can be used as a factor for the indicator. While there is no correlation with the 7-DMA of DGR, PoMSI is strongly correlated (0.671 to 0.996) with the cumulative confirmed cases, and it can be said that as cases continuously rise, the probability of meeting someone COVID-positive will also be higher. This shows that the indicator not only reflects the current exposure risk of certain activities but also has a predictive nature, since it correlates with the cumulative confirmed cases of the following week and can be used to anticipate them. This information can then be used for pandemic management.
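The abstract does not give the exact PoMSI formula. One common birthday-paradox-style formulation — the complement of the probability that none of n independent contacts is infected — would look as follows; this is an illustrative assumption, not necessarily the authors' model:

```python
def pomsi(prevalence: float, contacts: int) -> float:
    """Probability of meeting at least one infected person among
    `contacts` independent encounters, given a population prevalence.
    Birthday-paradox-style complement: 1 - P(no infected contact)."""
    return 1.0 - (1.0 - prevalence) ** contacts

# With a hypothetical 0.1% prevalence, risk grows quickly as an activity
# implies more contacts:
for n in (10, 100, 1000):
    print(n, round(pomsi(0.001, n), 3))
```

The same complement trick underlies the classic birthday problem, which is presumably why the abstract invokes it: individually unlikely events become likely once enough independent encounters accumulate.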

Paper Nr: 76
Title:

A Comparison between Textual Similarity Functions Applied to a Query Library for Inspectors of Extractive Industries

Authors:

Junior Zilles, Jonata C. Wieczynski, Giancarlo Lucca and Eduardo N. Borges

Abstract: Mineral extraction industries, including oil and gas, use a range of equipment that requires several inspections before it is ready for use. These inspections should only be carried out by certified inspectors, since the equipment operates under high pressure and deals with materials that are toxic and dangerous to both human life and the environment, increasing the risk of accidents. In order to facilitate the search for qualified professionals holding different certified techniques, this article presents the construction of a RESTful API that implements a Web service for querying inspectors using textual similarity. The experiments covered 48,374 inspectors, with 74,134 certifications and 85,512 techniques. We evaluated the performance and quality of the system using a set of distinct similarity functions.
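The specific similarity functions compared in the paper are not listed in this abstract. Two widely used textual similarity measures that such an inspector-query service might employ can be sketched with the Python standard library alone; the query string and names below are hypothetical:

```python
from difflib import SequenceMatcher

def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity: shared tokens over all tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def ratio(a: str, b: str) -> float:
    """Character-level similarity (difflib's Ratcliff/Obershelp ratio)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical query against two candidate inspector names: the two
# measures can rank candidates differently, which is why such functions
# are worth comparing empirically.
query = "joao da silva"
for name in ("Joao Silva", "Joana da Silveira"):
    print(name, round(jaccard(query, name), 2), round(ratio(query, name), 2))
```

Token-based measures tolerate reordering and missing words, while character-based measures tolerate typos; a comparison like the paper's determines which trade-off fits the data.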

Paper Nr: 122
Title:

Empirical Evaluation of a Textual Approach to Database Design in a Modeling Tool

Authors:

Jonnathan Lopes, Maicon Bernardino, Fábio Basso and Elder Rodrigues

Abstract: This article reports an empirical assessment study, conducted with 27 subjects, intended to measure the effort (time spent), precision, recall and F-measure of a proposed tool based on a textual approach (ERtext), comparing it with a tool based on a graphical approach, brModelo. The results are: 1) less effort is associated with the graphical approach; and 2) regarding model quality, the two approaches have similar performance. Since the results show no considerable statistical differences between the two design approaches, we conclude that the usage of a textual approach is feasible, and thus that ERtext is a good tool alternative in the context of conceptual database modeling.

Paper Nr: 133
Title:

Application of Classification and Word Embedding Techniques to Evaluate Tourists’ Hotel-revisit Intention

Authors:

Evripides Christodoulou, Andreas Gregoriades, Maria Pampaka and Herodotos Herodotou

Abstract: Revisit intention is a key indicator of future business performance in the hospitality industry. This work focuses on identifying patterns in user-generated data that explain the reasons why tourists may revisit a hotel they stayed at during their holidays, and aims to identify differences between two classes of hotels (4-5 star and 2-3 star). The method utilises data from TripAdvisor retrieved using a scraper application. Topic modelling is initially performed to identify the main themes discussed in each tourist review. Subsequently, reviews are labelled according to whether they mention their author’s intention to revisit the hotel in the future, using an ontology of revisit intention generated with Word2Vec word embeddings. The topics identified in the labelled reviews are used to train an Extreme Gradient Boosting (XGBoost) model to predict revisit intention, which is then used to identify topic patterns in reviews that relate to revisit intention. The learned model achieved satisfactory performance and was used to identify the most influential topics related to revisit intention, using an explainable machine learning technique to visually illustrate the rules embedded in the learned XGBoost model. The method is applied to reviews from tourists who visited Cyprus between 2009 and 2019. Results highlight that staff professionalism (e.g., politeness, smiling) is critical for both classes of hotels; however, its effect is smaller on 2-3 star hotels, where cleanliness has a greater influence on revisiting.

Paper Nr: 136
Title:

On Functional Requirements for Keyword-based Query over Heterogeneous Databases on the Web

Authors:

Plinio S. Leitão-Junior, Fábio Nogueira de Lucena, Mariana S. Ramada, Leonardo A. Ribeiro and João Carlos da Silva

Abstract: Context. A large amount of data is made available daily on the Web, but many databases cannot be accessed by conventional search engines, as they require proper access methods and specialised knowledge of their access languages. Focus. In a scenario of non-expert users accessing databases, multiple database categories, and multiple idioms, this work analyzes the functional requirements that need to be considered for keyword query processing over data sources on the Web. The problem is still open and involves challenges such as query interpretation and database access. Method. The investigation is centered on the problem itself, which is portrayed by a set of functional issues that together represent the challenges of the research field. Approach. This work introduces and systematically analyzes the functional requirements within the problem scope. Issues reported in the literature are refined and evolved to support the modeling of the problem views: functional responsibilities and their interactions, by messaging, between problem objects. Conclusions and Results. This paper contributes to characterizing the problem, makes its understanding clearer and promotes the development of keyword-based query processing systems. A software engineering artifact is used to model the problem and make it more formal and precise. Further studies will refine these requirements and build (specialise) artifacts tailored to the solution space.

Paper Nr: 137
Title:

Visual Analysis Tool for BLE Technology-based Tracking Data

Authors:

Flavia A. Schneider, Adriano Branco, Ariane B. Rodrigues, Felipe Carvalho, Simone J. Barbosa, Markus Endler and Hélio Lopes

Abstract: Several systems deal with human mobility. Most of them target outdoor environments and use mobile phones to capture data. However, there is a growing interest among enterprises in considering indoor movement, taking employee and client classes into account. Moreover, they usually want to assign semantics to the visited locations. In this work, we propose a visual exploration tool for analyzing the dynamics of individual movements in an indoor environment. We present the use of suitable charts and animations to better explore these complex data. Finally, we argue that our solution could be used to monitor social distancing in indoor environments, which is particularly relevant during the current COVID-19 pandemic.

Paper Nr: 157
Title:

A Practical Guide to Support Predictive Tasks in Data Science

Authors:

José C. Filho, José M. Monteiro, César L. Mattos and Juvêncio S. Nobre

Abstract: Currently, professionals from the most diverse areas of knowledge need to explore their data repositories in order to extract knowledge and create new products or services. Several tools have been proposed to facilitate the tasks involved in the Data Science lifecycle. However, such tools require their users to have specific (and deep) knowledge in different areas of Computing and Statistics, making their use practically unfeasible for professionals who are not data science specialists. In this paper, we propose a guideline to support predictive tasks in data science. In addition to being useful for non-experts in Data Science, the proposed guideline can support data scientists, data engineers or programmers who are starting to deal with predictive tasks. Besides, we present a tool, called DSAdvisor, which follows the stages of the proposed guideline. DSAdvisor aims to encourage non-expert users to build machine learning models to solve predictive tasks, extracting knowledge from their own data repositories. More specifically, DSAdvisor guides these professionals in predictive tasks involving regression and classification.

Paper Nr: 175
Title:

Facial Expression Recognition System for Stress Detection with Deep Learning

Authors:

José Almeida and Fátima Rodrigues

Abstract: Stress is the body's natural reaction to external and internal stimuli. Despite being natural, prolonged exposure to stressors can contribute to serious health problems. These reactions are reflected not only physiologically but also psychologically, translating into emotions and facial expressions. Based on this, we developed a proof of concept for a stress detector, consisting of a convolutional neural network capable of classifying facial expressions and an application that uses this model to classify real-time images of the user's face and thereby assess the presence of signs of stress. Transfer learning together with fine-tuning was used to create the classification model. In this way, we took advantage of the pre-trained networks VGG16, VGG19, and Inception-ResNet V2 to solve the problem at hand. Two classifier architectures were considered for the transfer learning process. After several experiments, it was determined that VGG16, together with a classifier based on a convolutional layer, was the candidate with the best performance at classifying stressful emotions. The results obtained are very promising, and the proposed stress-detection system is non-invasive, requiring only a webcam to monitor the user's facial expressions.
Download

Paper Nr: 178
Title:

An Applied Risk Identification Approach in the ICT Governance and Management Macroprocesses of a Brazilian Federal Government Agency

Authors:

Edna D. Canedo, Ana P. Morais do Vale, Rogério M. Gravina, Rafael L. Patrão, Leomar Camargo de Souza, Vinicius D. Reis, Fábio L. Mendonça and Rafael D. Sousa Jr.

Abstract: Risk management is of great importance both for private organizations and for public administration organizations. Thus, in order to guarantee risk management that is effective and properly aligned with the organizational objectives, it is necessary to map and continuously evaluate the possible risks that may impact the organization's service provision. This work presents the identification of the risks of the Macroprocesses of Management and Governance of Information and Communication Technology (ICT) of a federal public administration agency. The identification and classification of risks were carried out using the integrity and risk management support system (AGIR). The resulting classification of ICT risks will support stakeholders in decision making, allowing for a better assessment and quality of the ICT services provided by the organization to its users.
Download

Paper Nr: 182
Title:

Evaluating the Use of the Open Trip Model for Process Mining: An Informal Conceptual Mapping Study in Logistics

Authors:

Jean S. Piest, Jennifer A. Cutinha, Rob H. Bemthuis and Faiza A. Bukhsh

Abstract: When aggregating logistic event data from different supply chain actors and information systems for process mining, interoperability, data loss, and data quality are common challenges. This position paper proposes and evaluates the use of the Open Trip Model (OTM) for process mining. Inspired by the current industrial use of the OTM for reporting and business intelligence, we believe that the data model of OTM can be utilized for unified storage, integration, interoperability, and querying of logistic event data. Therefore, the OTM data model is mapped to a generic event log structure to satisfy the minimum requirements for process mining. A demonstrative scenario is used to show how event data can be extracted from the OTM’s default scenario dataset to create an event log as the starting point for process mining. Thus, this approach provides a foundation for future research about interoperability challenges and unifying process mining models based on industry standards, and a starting point for developing process mining applications in the logistics industry.
Download
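The minimum requirements for process mining that the abstract above refers to boil down to an event log in which every row carries a case identifier, an activity, and a timestamp. As a rough illustration of such a mapping (the attribute names `trip_id`, `lifecycle`, and `time` are placeholders for this sketch, not the OTM standard's actual field names):

```python
import csv
import io

def otm_events_to_log(events):
    """Map OTM-style event dicts to the minimal event-log rows needed
    for process mining: (case_id, activity, timestamp).
    Field names here are illustrative, not official OTM attributes."""
    rows = []
    for e in sorted(events, key=lambda e: e["time"]):
        rows.append({"case_id": e["trip_id"],
                     "activity": e["lifecycle"],
                     "timestamp": e["time"]})
    return rows

def to_csv(rows):
    """Serialize the event log as CSV, the usual input format for
    process mining tools."""
    buf = io.StringIO()
    w = csv.DictWriter(buf, fieldnames=["case_id", "activity", "timestamp"])
    w.writeheader()
    w.writerows(rows)
    return buf.getvalue()
```

Each trip becomes a case, and its lifecycle states become the activities of that case's trace.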

Paper Nr: 196
Title:

Towards a Blockchain Architecture for Animal Sanitary Control

Authors:

Glenio Descovi, Vinícius Maran, Denilson Ebling and Alencar Machado

Abstract: Blockchain technology is widely used in cryptocurrency transactions; it became popular with Bitcoin but has recently been applied in many other areas, including animal sanitary control. A Blockchain can be described as an immutable ledger in which systems can store transactions, documents, history, and countless other data generated in any process. In Brazil, an animal sanitary control platform called Plataforma de Defesa Sanitaria Animal (PDSA-RS) was recently proposed and is widely used in the state of Rio Grande do Sul. Currently, the PDSA-RS uses a centralized server holding the relevant information for the animal sanitary control process. This work presents the process of defining and integrating the PDSA-RS with a private Blockchain managed by a software architecture. The main goal of this integration is to provide traceability, immutability and transparency for the existing process and for the data generated in the certification of the poultry establishments that are part of the animal sanitary control. In the evaluation of the integration process, it was possible to observe that the blockchain extension offered persistence, anonymity and auditability of the information related to the animal sanitary control.
Download

Paper Nr: 207
Title:

Quality Management in Social Business Intelligence Projects

Authors:

María J. Aramburu, Rafael Berlanga and Indira Lanza-Cruz

Abstract: Social networks have become a new source of useful information for companies. Increasing the value of social data requires, first, assessing and improving the quality of the relevant data and, subsequently, developing practical solutions that apply them in business intelligence tasks. This paper focuses on the Twitter social network and the processing of social data for business intelligence projects. With this purpose, the paper starts by defining the special requirements of the analysis cubes of a Social Business Intelligence (SoBI) project and by reviewing previous work to demonstrate the lack of valid approaches to this problem. Afterwards, we present a new data processing method for SoBI projects whose main contribution is a phase of data exploration and profiling that serves to build a quality data collection with respect to the analysis objectives of the project.
Download

Paper Nr: 215
Title:

Reference Architecture for Efficient Computer Integrated Manufacturing

Authors:

Abdelkarim Remli, Amal Khtira and Bouchra El Asri

Abstract: Technological progress, combined with rapidly changing customer demands, is pushing for continuous changes in manufacturing environments. This has led industrial companies to seek the optimization of their processes through Computer Integrated Manufacturing (CIM), whose main purpose is to link the shop floor systems to the high-level business systems. Based on a literature review on CIM architectures that we conducted earlier, we identified the different aspects related to CIM and detected the limitations of the existing approaches. With the aim of overcoming these limitations, we present in this paper a reference architecture for CIM based on the ISA-95 standard. We also explain how the proposed architecture was applied to a case study from the automotive industry.
Download

Paper Nr: 219
Title:

On the Evaluation of Classification Methods Applied to Requests for Revision of Registered Debts

Authors:

Helton S. Lima, Damires S. Fernandes, Thiago M. Moura and Daniel Sabóia

Abstract: Tax management is a complex problem faced by governments around the world. In Brazil, data analytics has been increasingly used to support and enhance tax management processes and to help solve problems in this area. In this light, this work proposes an approach that uses supervised learning to classify requests of an administrative service. The requests at hand are named Requests for Revision of Registered Debt (R3Ds). The service underlying such requests is offered by Brazil's National Treasury Attorney-General's Office and usually deals with a high volume of registrations. The experimental evaluation accomplished in this work presents some promising results: the obtained classification models present good levels of accuracy, area under the ROC curve, and recall. Four evaluation scenarios have been tested, including imbalanced and balanced data. The Random Forest model achieves the best results in all the evaluated scenarios.
Download

Paper Nr: 237
Title:

A Blockchain Approach to Support Vaccination Process in a Country

Authors:

Andrei Carniel, Gustavo Leme, Juliana M. Bezerra and Celso M. Hirata

Abstract: Vaccines are important means to prevent diseases and save lives. Data related to the vaccination process, such as vaccine identification and the number of vaccinated people, is critical to the production and distribution of vaccines in a way that achieves the desired immunization in a country. We propose a reliable approach based on Blockchain to manage vaccination data. We describe the roles played in a vaccination process as well as their relations. Aiming to validate our approach with a prototype, we present the data required in two scenarios, including the registration of a vaccine administration and the visualization of the vaccination history of a citizen. We discuss the potential of our proposal by indicating analyses that can be conducted with the vaccination data, as well as challenges to be addressed in further investigations.
Download

Paper Nr: 239
Title:

Challenges of Infrastructure in Cloud Computing for Education Field: A Systematic Literature Review

Authors:

Nurma A. Wigati, Ari Wibisono and Achmad N. Hidayanto

Abstract: Cloud computing is needed in the educational sector, especially in universities, to ease administration and to give everyone access to learning. The development of cloud computing infrastructure therefore needs to take the aims of the university into account. Infrastructure as a Service (IaaS) in cloud computing faces problems related to resources, security, and finance. This study follows the Kitchenham protocol to systematically review the literature on the challenges of cloud computing as an infrastructure service for the education field, and then surveys the techniques that can be used by universities in Indonesia. The study recommends that the management of IaaS be considered carefully in order to develop cloud computing optimally.
Download

Paper Nr: 247
Title:

A Consortied Ledger-based Cloud Computing Provider Users Reputation Architecture

Authors:

Gabriel E. Vasques and Adriano Fiorese

Abstract: Many users and organizations have been attracted by the benefits offered by cloud computing services. However, there are still several security issues and challenges in these environments. Controlling access to providers is an important task and must be carried out safely. In this sense, this paper presents a ledger-based collaborative user reputation architecture for a consortium of cloud providers. The reputation is based on two kinds of indicators: objective and subjective ones. The objective data corresponds to the user's session data. Subjective data, on the other hand, corresponds to the providers' feedback about users and to data obtained from external sources. The combination of these two indicators defines the reputation value, aiming to avoid bias in the evaluations carried out by the providers and possible conflicts of interest. In order to evaluate the proposed architecture, a scenario with users and cloud providers was developed. Evaluation results show the applicability of storing and providing users' reputation values to the participating providers.
Download

Paper Nr: 70
Title:

Automatic Extraction of a Document-oriented NoSQL Schema

Authors:

Fatma Abdelhedi, Amal A. Brahim, Hela Rajhi, Rabah T. Ferhat and Gilles Zurfluh

Abstract: NoSQL systems make it possible to manage Databases (DB) verifying the 3Vs: Volume, Variety and Velocity. Most of these systems are characterized by the schemaless property, which means the absence of a data schema when creating a DB. This property provides undeniable flexibility by allowing the schema to evolve while the DB is in use; however, it is a major obstacle for developers and decision makers, since the expression of (SQL-type) queries requires precise knowledge of this schema. In this article, we provide a process for automatically extracting the schema from a NoSQL document-oriented DB. To do this, we use the MDA (Model Driven Architecture) approach: starting from a NoSQL DB, we propose transformation rules to generate the schema. An experiment with the extraction process was carried out on a medical application.
Download
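As a simplified illustration of what schema extraction from a schemaless document store involves (a generic sketch, not the authors' MDA-based transformation rules), one can infer a union schema of field paths and observed types from a sample of documents:

```python
def infer_schema(docs):
    """Infer a union schema (field path -> set of observed type names)
    from a list of JSON-like documents. Nested objects are recursed
    into using dotted paths. A simplified stand-in for model-driven
    transformation rules."""
    schema = {}

    def visit(prefix, value):
        if isinstance(value, dict):
            for k, v in value.items():
                visit(f"{prefix}.{k}" if prefix else k, v)
        else:
            schema.setdefault(prefix, set()).add(type(value).__name__)

    for doc in docs:
        visit("", doc)
    return schema
```

Because documents in the same collection may have different fields, the inferred schema is the union of all paths seen, with the set of types recording any heterogeneity.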

Paper Nr: 145
Title:

Management Support Systems Model for Incident Resolution in FinTech based on Business Intelligence

Authors:

María C. Zúñiga, Walter Fuertes, Hugo V. Flores and Theofilos Toulkeridis

Abstract: Financial technology corporations (FinTech) specialize in the electronic processing of business transactions and the compensation of charges and payments. Such operations rely on a technological platform that connects multiple financial institutions with companies of the public and private sector. In its constant concern for the provision of efficient services, the company under study created a unit to guarantee the quality and 24x7x365 availability of its services, granting clients confidence in online financial environments through high security standards and timely incident management. However, poor management of incident resolution was detected, as there is a lack of tools to monitor transactional behavior or identify anomalies. Consequently, resolution time has been delayed and, with it, the continuity and regular operation of services. Economic losses are frequent, yet the real loss is in confidence and reputation. In response to this problem, the current study proposes a support model of information management for the appropriate and timely resolution of incidents by analyzing historical information, which allows the detection of anomalies in transactional behavior and improves the resolution time of events affecting financial services. The methodology used is ad hoc and consists of various phases, such as identifying the present situation; afterwards, it builds the solution based on the Ralph Kimball and Scrum methodologies and validates its results. With the implementation of this work, the business intelligence model improves incident management by providing indicators for the timely detection of anomalies in financial transactions.
Download

Paper Nr: 176
Title:

A Text Similarity-based Process for Extracting JSON Conceptual Schemas

Authors:

Fhabiana T. Machado, Deise Saccol, Eduardo Piveta, Renata Padilha and Ezequiel Ribeiro

Abstract: NoSQL (Not Only SQL) document-oriented databases stand out because of the need for scalability. This storage model promises flexibility in documents, using files and data sources in JSON (JavaScript Object Notation) format. It also allows documents within the same collection to have different fields. Such differences occur in database integration scenarios: when the user needs to access different data sources in a unified way, the lack of standardization in the structures can be troublesome. In this sense, this work presents a process for conceptual schema extraction in JSON datasets. Our proposal analyzes fields that represent the same information but are written differently; in the context of this work, differences in writing are related to the treatment of synonyms and characters. To perform this analysis, techniques such as character-based and knowledge-based similarity functions, as well as stemming, are used. We therefore specify a process to extract the implicit schema present in these data sources, applying different textual equivalence techniques to field names. We applied the process in an experiment from the scientific publications domain, correctly identifying 80% of the equivalent terms. The process outputs a unified conceptual schema and the respective mappings for the equivalent terms, contributing to the schema integration problem.
Download
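A character-based similarity function of the kind the abstract mentions can be illustrated with a normalized Levenshtein distance over field names (a generic sketch; the threshold and helper names are illustrative, not the paper's):

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def match_fields(fields_a, fields_b, threshold=0.8):
    """Pair field names whose normalized similarity reaches threshold."""
    pairs = []
    for fa in fields_a:
        for fb in fields_b:
            m = max(len(fa), len(fb))
            sim = 1 - levenshtein(fa.lower(), fb.lower()) / m
            if sim >= threshold:
                pairs.append((fa, fb, round(sim, 2)))
    return pairs
```

In the paper's setting, such character-level matches would be combined with knowledge-based (synonym) similarity and stemming before producing the unified schema.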

Paper Nr: 179
Title:

Estimated Costs for Single Tuition Fee (STF) using Naïve Bayes Method

Authors:

Juhriyansyah Dalle, Dwi Hastuti, Taufik Rahman, A. Akrim, Sri Erliani, Taufik Hidayat, Siska Devina, Agustina Lestari, B. Baharuddin, Hesti Fibriasari, Akhmad Murjani, Erika Lismayani, Ahmad Yusuf and Candra K. Negara

Abstract: The difference in the amount of the single tuition fee (STF) paid by students with middle and upper economic backgrounds causes an injustice gap. This is partly due to the instability of the system used, especially in terms of the STF determination methods; in addition, the criteria entered into the system are still insufficient for determining the STF. Therefore, this study aims to build a web-based STF payment system using Naïve Bayes, probability, and statistical methods, so that students can easily determine the tuition fee of an institution. System testing was carried out by comparing the output with the verification results, which showed that the estimated determination of STF payments is suitable, with 83.3% agreement.
Download
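For readers unfamiliar with the method, a categorical Naïve Bayes classifier of the kind the abstract refers to can be sketched as follows (the feature and class names are invented for illustration and are not the study's actual STF criteria):

```python
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Train a categorical Naive Bayes model.
    rows: list of dicts feature -> category; labels: list of classes."""
    class_counts = Counter(labels)
    feat_counts = defaultdict(Counter)  # (class, feature) -> value counts
    for row, lab in zip(rows, labels):
        for f, v in row.items():
            feat_counts[(lab, f)][v] += 1
    return class_counts, feat_counts

def predict_nb(model, row):
    """Pick the class maximizing prior * product of per-feature
    likelihoods, with Laplace smoothing over observed categories."""
    class_counts, feat_counts = model
    total = sum(class_counts.values())
    best, best_p = None, -1.0
    for lab, n in class_counts.items():
        p = n / total
        for f, v in row.items():
            counts = feat_counts[(lab, f)]
            vocab = {c for (l, ff), cc in feat_counts.items()
                     if ff == f for c in cc}
            p *= (counts[v] + 1) / (n + len(vocab))
        if p > best_p:
            best, best_p = lab, p
    return best
```

The study's system would apply the same Bayes-rule computation to its own STF criteria rather than this toy "income" feature.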

Paper Nr: 187
Title:

Digital Lighthouse: A Platform for Monitoring Public Groups in WhatsApp

Authors:

Ivandro Claudino de Sá, José M. Monteiro, José W. Franco da Silva, Leonardo M. Medeiros, Pedro C. Mourão and Lucas C. Carneiro da Cunha

Abstract: The large-scale dissemination of misinformation through social media has become a critical issue, harming social stability, democracy, and public health. In Brazil, 48% of the population uses WhatsApp to get news, and many groups have used this instant messaging application to spread misinformation, especially as part of articulated political or ideological campaigns. In this context, WhatsApp provides an important feature, public groups, which are particularly suitable for misinformation dissemination. Thus, developing software frameworks to monitor the spread of misinformation in WhatsApp public groups has become a field of high interest in academia, government and industry. In this work, we present an entire platform, called Digital Lighthouse, that aims at finding WhatsApp public groups and at extracting, cleaning, analyzing, and visualizing the misinformation that spreads in such groups. Using the Digital Lighthouse, we built three different datasets. We hope that our platform can help journalists and researchers to understand misinformation propagation in Brazil.
Download

Paper Nr: 204
Title:

Architectural Challenges on the Integration of e-Commerce and ERP Systems: A Case Study

Authors:

Fábio Santos and Ricardo Martinho

Abstract: Many retail companies had to go online before their Enterprise Resource Planning (ERP)-type systems were ready to fulfill all business requirements. Their overall daily operation still heavily depends on these highly customized systems, which are often mandatory because of legal obligations and frequently come without "off the shelf" e-commerce integration. This paper identifies the main challenges derived from the architectural and integration requirements of a case study at an e-tailer company that operates via two sales channels: an online store and third-party marketplaces. These challenges led to the definition of a system architecture and of implementation considerations for this common integration scenario, which were validated through their implementation. Our proposed approach allows ERP-dependent organizations to start selling online with open-source technologies, avoiding extra ERP licensing and hidden maintenance costs.
Download

Paper Nr: 240
Title:

Automated Data Extraction from PDF Documents: Application to Large Sets of Educational Tests

Authors:

Karina Wiechork and Andrea S. Charão

Abstract: The massive production of documents in portable document format (PDF) has motivated research on the automated extraction of the data contained in these files. This work focuses mainly on extraction from natively digital PDF documents made available in large repositories of educational exams. For this, the educational tests applied at Enade were collected automatically using scripts developed with Scrapy. The files used for the evaluation comprise 343 tests, with 11,196 objective and discursive questions, and 396 answers, with 14,475 alternatives extracted from the objective questions. The Aletheia tool was used to construct the ground truth for the tests. The extractions were performed with existing tools that extract data from PDF files: Excalibur and Tabula for tabular data (the answers), CyberPDF and PDFMiner for textual content (the questions), and Aletheia and ExamClipper for regions of interest (cutouts of the questions). The results of the extractions point out some limitations related to the diversity of layouts in each year of application. The extracted data provide useful information for a wide variety of fields, including academic research and support for students and teachers.
Download

Paper Nr: 246
Title:

Data Mining, Business Intelligence, Grid and Utility Computing: A Bibliometric Review of the Literature from 2015 to 2020

Authors:

Ernani Damasceno, Ana Azevedo and Manuel Pérez-Cota

Abstract: A bibliometric review is a type of systematic review that analyzes a wide range of articles or scientific publications using statistical tools to identify trends. Some areas of study are already well consolidated, with a wide range of studies, such as Data Mining (DM) and Business Intelligence (BI), while new tools are being researched to provide a more effective way of using technologies in organizations, namely Grid Computing (GC) and Utility Computing (UC). Thus, this article analyzes publication databases in order to verify whether there are studies on DM and BI together with GC and UC from 2015 to 2020. The purpose is to demonstrate not only the quantity but also aspects such as the relations between topics, the number of publications per year, the main countries and institutions, the research network, and the H5 index from Google Scholar. Finally, the results are presented through the number of publications, percentages, and the most relevant subjects, based on the Essential Science Indicators, which determine the influence of countries, institutes, and authors in a specific field of study.
Download

Area 2 - Artificial Intelligence and Decision Support Systems

Full Papers
Paper Nr: 26
Title:

Detecting Non-routine Customer Support E-Mails

Authors:

Anton Borg and Jim Ahlstrand

Abstract: Customer support can affect customer churn both positively and negatively. By identifying non-routine e-mails to be handled by senior customer support agents, the customer support experience can potentially be improved. Complex, i.e. non-routine, e-mails might require a longer time to handle, making them more suitable for senior staff. Non-routine e-mails can be considered anomalous. This paper investigates an approach for context-based unsupervised anomaly detection that can assign each e-mail an anomaly score. This is investigated in a customer support setting with 43,523 e-mails. Context-based anomalies are investigated over different time resolutions and by multiple algorithms. The likelihood that an e-mail is anomalous can be considered increased when it is identified by several algorithms or over multiple time resolutions. The approach is suitable for implementation as a decision support system that helps customer support agents detect e-mails that should be handled by senior staff.
Download
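The idea that agreement between detectors raises the likelihood of an anomaly can be illustrated with a toy ensemble over a single numeric feature per e-mail (a stand-in sketch using z-score and IQR rules, not the paper's actual algorithms):

```python
import statistics

def anomaly_scores(values, z_thresh=3.0, iqr_k=1.5):
    """Toy ensemble anomaly scoring: two detectors (z-score and IQR
    fence) each flag a point, and the score is the number of detectors
    that agree (0, 1, or 2). Higher scores mean higher likelihood of
    a genuine anomaly."""
    mean = statistics.fmean(values)
    std = statistics.pstdev(values)
    q = statistics.quantiles(values, n=4)
    q1, q3 = q[0], q[2]
    iqr = q3 - q1
    lo, hi = q1 - iqr_k * iqr, q3 + iqr_k * iqr
    scores = []
    for v in values:
        z_flag = std > 0 and abs(v - mean) / std > z_thresh
        iqr_flag = v < lo or v > hi
        scores.append(int(z_flag) + int(iqr_flag))
    return scores
```

In the paper's setting, the detectors operate on e-mail features over several time resolutions, but the principle of counting agreement is the same.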

Paper Nr: 40
Title:

LDBNN: A Local Density-based Nearest Neighbor Classifier

Authors:

Joel L. Carbonera

Abstract: K-Nearest Neighbor (KNN) is a very simple and powerful classification algorithm. In this paper, we propose a new KNN-based classifier, called local density-based nearest neighbors (LDBNN). It considers that a target instance should be classified in the class whose k nearest neighbors constitute a dense region, where the neighbors are near to each other and also near to the target instance. The performance of the proposed algorithm was compared with that of 5 important KNN-based classifiers, evaluated in terms of accuracy on 16 well-known datasets. The experimental results show that the proposed algorithm achieves the highest accuracy in most of the datasets.
Download
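A toy version of the density idea described in the abstract (not the authors' exact LDBNN algorithm) might score each class by how tightly the target's k nearest same-class neighbors cluster around each other and around the target:

```python
import math
from collections import defaultdict

def ldbnn_predict(X, y, target, k=3):
    """Toy local density-based nearest-neighbor classifier (illustrative
    only). For each class, take the k training points of that class
    closest to the target and score the class by the inverse of the
    mean pairwise distance among those neighbors and the target:
    denser regions score higher."""
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)

    best_class, best_score = None, -1.0
    for cls, pts in by_class.items():
        neigh = sorted(pts, key=lambda p: math.dist(p, target))[:k]
        group = neigh + [target]
        pairs = [math.dist(a, b) for i, a in enumerate(group)
                 for b in group[i + 1:]]
        mean_d = sum(pairs) / len(pairs)
        score = 1.0 / (mean_d + 1e-9)  # avoid division by zero
        if score > best_score:
            best_class, best_score = cls, score
    return best_class
```

Unlike plain KNN voting, the winning class here is the one whose neighborhood is densest around the target, which is the intuition the abstract describes.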

Paper Nr: 41
Title:

A Global Density-based Approach for Instance Selection

Authors:

Joel L. Carbonera

Abstract: Due to the increasing size of datasets, instance selection techniques have been applied to reduce the computational resources involved in data mining and machine learning tasks. In this paper, we propose a global density-based approach for selecting instances. The algorithm selects only the densest instances in a given neighborhood, as well as the instances on the boundaries between classes, while excluding potentially harmful instances. Our method was evaluated on 14 well-known datasets used in a classification task. The performance of the proposed algorithm was compared to that of 8 prototype selection algorithms in terms of accuracy and reduction rate. The experimental results show that, in general, the proposed algorithm provides a good trade-off between reduction rate and accuracy, with reasonable time complexity.
Download
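The selection criterion described in the abstract, keeping locally densest instances plus class-boundary instances, can be sketched as follows (an illustrative simplification with a fixed radius, not the paper's exact algorithm):

```python
import math

def select_instances(X, y, radius=1.5):
    """Toy density-based instance selection: keep (1) instances that are
    the densest within their own neighborhood and (2) boundary
    instances whose nearest neighbor belongs to a different class.
    Everything else is discarded, reducing the dataset."""
    n = len(X)
    d = [[math.dist(X[i], X[j]) for j in range(n)] for i in range(n)]
    # density = number of neighbors within `radius`
    density = [sum(1 for j in range(n) if j != i and d[i][j] <= radius)
               for i in range(n)]
    keep = set()
    for i in range(n):
        neigh = [j for j in range(n) if j != i and d[i][j] <= radius]
        # keep i if no neighbor is denser than it
        if all(density[i] >= density[j] for j in neigh):
            keep.add(i)
        # keep boundary instances: nearest neighbor has another label
        nn = min((j for j in range(n) if j != i), key=lambda j: d[i][j])
        if y[nn] != y[i]:
            keep.add(i)
    return sorted(keep)
```

On two well-separated clusters, only each cluster's densest center survives, which illustrates the reduction-rate/accuracy trade-off the abstract evaluates.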

Paper Nr: 51
Title:

Greedy Scheduling: A Neural Network Method to Reduce Task Failure in Software Crowdsourcing

Authors:

Jordan Urbaczek, Razieh Saremi, Mostaan L. Saremi and Julian Togelius

Abstract: Highly dynamic and competitive crowdsourcing software development (CSD) marketplaces may experience task failure due to unforeseen reasons, such as increased competition over shared supplier resources or the uncertainty associated with a dynamic worker supply. Existing analysis reveals an average task failure ratio of 15.7% in software crowdsourcing markets. This leads to an increasing need for scheduling support for CSD managers to improve the efficiency and predictability of crowdsourcing processes and outcomes. To that end, this research proposes a task scheduling method based on neural networks and develops a system that can predict and analyze the task failure probability upon arrival. More specifically, the model uses a range of input variables, including the number of open tasks in the platform, the average similarity between arriving tasks and open tasks, the winner's monetary prize, and the task duration, to predict the probability of task failure on the planned arrival date and on two surplus days. This prediction offers the recommended day with the lowest task failure probability on which to post the task. On average, the model provided a 4% lower failure probability per project. The proposed model empowers crowdsourcing managers to explore potential crowdsourcing outcomes with respect to different task arrival strategies.
Download

Paper Nr: 57
Title:

A Decision Support System to Evaluate Suppliers in the Context of Global Service Providers

Authors:

Bruno P. Bruck, Dario Vezzali, Manuel Iori, Carlo A. Magni and Daniele Pretolani

Abstract: In this paper, we present a decision support system (DSS) developed for a global service provider (GSP), which solves a real-world supplier selection problem. The GSP operates in the Italian facility management market, supplying customers with a variety of services. These services are subcontracted to external qualified suppliers spread all over Italy and chosen on the basis of several criteria, such as service quality, availability and proximity. Selecting the best supplier is a complex task due to the large number of suppliers and the great variety of facility management services offered by the GSP. In the proposed DSS, the choice of the best supplier for a certain service is made according to a thorough multi-criteria analysis. The weights for the criteria were generated by implementing both a simplified analytic hierarchy process and a revised Simos' procedure, later validated by the decision makers at the GSP. The DSS provides quick access to historical performance data, visual tools to aid decisions, and a suggested ranked list of suppliers for each given contract. The effectiveness of the proposed system was assessed by means of extensive simulations on seven years of real data and several rounds of validation with the company.
Download

Paper Nr: 60
Title:

Real-time Periodic Advertisement Recommendation Optimization under Delivery Constraint using Quantum-inspired Computer

Authors:

Fan Mo, Huida Jiao, Shun Morisawa, Makoto Nakamura, Koichi Kimura, Hisanori Fujisawa, Masafumi Ohtsuka and Hayato Yamana

Abstract: For commercial companies, tuning advertisement delivery to achieve a high conversion rate (CVR) is crucial for improving advertising effectiveness. Because advertisers use demand-side platforms (DSP) to deliver a certain number of ads within a fixed period, it is challenging for a DSP to maximize CVR while satisfying delivery constraints such as the number of delivered ads in each category. Although previous research aimed to optimize this combinatorial problem under various constraints, periodic updates remained an open question because of the time complexity involved. Our work is the first attempt to adopt digital annealers (DAs), quantum-inspired computers manufactured by Fujitsu Ltd., to achieve real-time periodic ad optimization. With periodic optimization in a short time, there is a greater chance of increasing ad recommendation precision. First, we exploit each user's behavior according to the web pages they have visited and then predict their CVR for each ad category. Second, we transform the optimization problem into a quadratic unconstrained binary optimization (QUBO) model applied to the DA. Experimental evaluations on real log data show that our proposed method improves the accuracy score from 0.237 to 0.322 while shortening the periodic advertisement recommendation time from 526s to 108s (a 4.9-times speed-up) in comparison with traditional algorithms.
Download
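The key step the abstract mentions, recasting a constrained selection problem as a quadratic unconstrained binary optimization (QUBO) model, can be illustrated on a toy instance: pick exactly k ads maximizing total CVR, with the cardinality constraint folded into the objective as a quadratic penalty (brute force stands in here for the digital annealer; the real problem and its constraints are far richer):

```python
from itertools import product

def build_qubo(cvr, k, penalty=10.0):
    """Build QUBO coefficients for 'pick exactly k ads maximizing
    total CVR'. Objective to minimize:
        -sum(cvr_i * x_i) + penalty * (sum(x_i) - k)^2
    Expanding the square over binary x (x_i^2 = x_i) gives the
    diagonal and upper-triangular entries below; the constant
    penalty * k^2 is dropped."""
    n = len(cvr)
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        Q[i][i] = -cvr[i] + penalty * (1 - 2 * k)   # linear terms
        for j in range(i + 1, n):
            Q[i][j] = 2 * penalty                   # penalty cross-terms
    return Q

def brute_force_qubo(Q):
    """Exhaustive minimizer standing in for the annealer hardware."""
    n = len(Q)
    best_x, best_e = None, float("inf")
    for x in product([0, 1], repeat=n):
        e = sum(Q[i][j] * x[i] * x[j]
                for i in range(n) for j in range(i, n))
        if e < best_e:
            best_x, best_e = x, e
    return best_x
```

As long as the penalty outweighs any achievable CVR gain, the minimum-energy solution satisfies the constraint exactly, which is the standard trick for mapping constrained problems onto annealing hardware.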

Paper Nr: 73
Title:

Agent-based Decentral Production Planning and Control: A New Approach for Multi-resource Scheduling

Authors:

Martin Krockert, Marvin Matthes and Torsten Munkelt

Abstract: Manufacturing jobs commonly require more than one resource in order to equip machines with tools and process jobs. To achieve a feasible production plan and control its execution in an agent-based decentralized production, we developed the new approach presented in this paper. We introduce a negotiation procedure based on job priority and overlapping time slots across all resources. In addition, we provide simulative evidence that our approach is superior, in terms of time-based key performance indicators, to commonly used queuing procedures, and that it provides a more stable production under uncertain customer order arrivals and deviating processing times.
Download

Paper Nr: 79
Title:

CC-separation Measure Applied in Business Group Decision Making

Authors:

Jonata C. Wieczynski, Giancarlo Lucca, Eduardo N. Borges, Graçaliz P. Dimuro, Rodolfo Lourenzutti and Humberto Bustince

Abstract: In business, one of the most important management functions is decision making. The Group Modular Choquet Random TOPSIS (GMC-RTOPSIS) is a Multi-Criteria Decision Making (MCDM) method that can work with multiple heterogeneous data types. This method uses the Choquet integral to deal with the interaction between different criteria. The Choquet integral has been generalized and applied in various fields of study, such as image processing, brain-computer interfaces, and classification problems. By generalizing the so-called extended Choquet integral by copulas, the concept of CC-integrals was introduced, presenting satisfactory results when used to aggregate information in Fuzzy Rule-Based Classification Systems. Taking this into consideration, in this paper we apply 11 different CC-integrals in the GMC-RTOPSIS. The results demonstrate that this approach has the advantage of allowing more flexibility and certainty in the choice process by giving a higher separation between the first- and second-ranked alternatives.
Download
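The discrete Choquet integral that GMC-RTOPSIS uses to aggregate interacting criteria can be computed as follows (a textbook sketch with an invented two-criteria fuzzy measure, not the paper's CC-integral generalization):

```python
def choquet(values, mu):
    """Discrete Choquet integral of `values` (criterion -> score) with
    respect to a fuzzy measure `mu`, given as a dict mapping frozensets
    of criteria to measure values, with mu[frozenset()] = 0 and
    mu[all criteria] = 1. Computed as
        sum_i (x_(i) - x_(i-1)) * mu(A_(i)),
    where x_(1) <= ... <= x_(n) and A_(i) is the set of criteria whose
    score is at least x_(i)."""
    order = sorted(values, key=values.get)  # criteria by ascending score
    total, prev = 0.0, 0.0
    remaining = set(values)
    for c in order:
        x = values[c]
        total += (x - prev) * mu[frozenset(remaining)]
        prev = x
        remaining.remove(c)
    return total
```

With an additive measure the integral reduces to a plain weighted sum; non-additive measures are what let it model interaction (redundancy or synergy) between criteria.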

Paper Nr: 88
Title:

An Integrated Task and Personnel Scheduling Problem to Optimize Distributed Services in Hospitals

Authors:

Nícolas P. Campana, Giorgio Zucchi, Manuel Iori, Carlo A. Magni and Anand Subramanian

Abstract: This paper addresses a real-life task and personnel scheduling problem arising in a large Italian company that provides cleaning services inside a hospital. In this case study, the challenge is to determine a schedule for the employees to clean the whole hospital while minimizing the total labor cost, taking into account that the building is a complex structure with multiple levels and that each room has different peculiarities. To solve the problem, we propose a three-step approach using mathematical models and metaheuristic algorithms. The solution obtained indicates that the schedule attained by our method is better than the one generated by the company. In addition, to test and validate our approach more thoroughly, a set of artificial instances has been created. The results indicate that our method can help organizations quickly generate and test a large variety of solutions. Our findings can be of general interest for other personnel scheduling problems involving distributed services.
Download

Paper Nr: 96
Title:

Assessing the Effectiveness of Multilingual Transformer-based Text Embeddings for Named Entity Recognition in Portuguese

Authors:

Diego L. Santos, Frederico C. Dutra, Fernando S. Parreiras and Wladmir C. Brandão

Abstract: Recent state-of-the-art named entity recognition approaches are based on deep neural networks that use an attention mechanism to learn how to extract named entities from relevant fragments of text. Usually, training models in a specific language leads to effective recognition, but it requires a lot of time and computational resources. Fine-tuning a pre-trained multilingual model can be simpler and faster, but it raises the question of how effective that recognition model can be. This article exploits multilingual models for named entity recognition by adapting and training transformer-based architectures for Portuguese, a challenging complex language. Experimental results show that multilingual transformer-based text embedding approaches fine-tuned on a large dataset outperform state-of-the-art transformer-based models trained specifically for Portuguese. In particular, we build a comprehensive dataset from different versions of HAREM to train our multilingual transformer-based text embedding approach, which achieves 88.0% precision and 87.8% F1 in named entity recognition for Portuguese, with gains of up to 9.89% in precision and 11.60% in F1 over the state-of-the-art monolingual approach trained specifically for Portuguese.
Download

Paper Nr: 102
Title:

An Improved Deep Learning Application for Leaf Shape Reconstruction and Damage Estimation

Authors:

Mateus C. Silva, Servio P. Ribeiro, Andrea C. Bianchi and Ricardo R. Oliveira

Abstract: Leaf damage estimation is an important research method, metric, and topic in both agricultural and ecological studies. Most previous studies that approach shape reconstruction work with parametric curves, lacking generality when treating leaves with different shapes. Other approaches try to calculate the damage without estimating the original leaf form. In this work, we propose a procedure to predict the original leaf shape and calculate its defoliation based on a Conditional Generative Adversarial Network (Conditional GAN). We trained and validated the algorithm on a dataset of leaf images from 33 different species. We also tested the resulting model on another dataset, containing leaf images from 153 different species. The results indicate that this model outperforms those in the literature and that the solution potentially works with different leaf shapes, even from untrained species.
Download

Paper Nr: 112
Title:

Embedded Edge Artificial Intelligence for Longitudinal Rip Detection in Conveyor Belt Applied at the Industrial Mining Environment

Authors:

Emerson Klippel, Ricardo R. Oliveira, Dmitry Maslov, Andrea C. Bianchi, Saul D. Silva and Charles B. Garrocho

Abstract: Failures in the detection of longitudinal rips on conveyor belts are considered catastrophic events in mining environments because of the financial losses they cause and the safety risks to which maintenance teams are exposed. The longitudinal rip detection technologies used today have limitations: the most reliable systems are expensive and complex, while the simplest and cheapest systems are unreliable. In view of this scenario, we studied the implementation of a longitudinal rip detection solution based on images of the conveyor belt. The images are collected in real time, and inference (rip detection) is carried out locally by a deep neural network model executed on an edge device. The results obtained with the prototype in controlled field tests were satisfactory and showed the feasibility of running deep neural network algorithms on edge devices. These results encourage the development of a complete solution for the detection of defects in conveyor belts considering all the operational conditions found in the mining environment.
Download

Paper Nr: 132
Title:

PlaceProfile: Employing Visual and Cluster Analysis to Profile Regions based on Points of Interest

Authors:

Rafael M. Christófano, Wilson E. Marcílio Júnior and Danilo M. Eler

Abstract: Understanding how commercial and social activities and points of interest are located within a city is essential to plan efficient cities in smart mobility. Over the years, the growth of data sources from distinct online social networks has enabled new perspectives for applications that provide mechanisms to aid in understanding how people move between different regions of a city. To help enterprises and governments better understand and compare distinct regions of a city, this work proposes a web application called PlaceProfile to perform visual profiling of city areas based on iconographic visualization and to label areas based on clustering algorithms. The visualization results are overlaid on Google Maps to enrich the map layout and help analysts understand region profiles at a glance. In addition, PlaceProfile coordinates a radar chart with areas selected by the user to enable detailed inspection of the frequency of categories of points of interest (POIs). This linked-views approach also supports the explainability of the clustering algorithms by allowing inspection of the attributes used to compute similarities. We employed the proposed approach in a case study in the city of São Paulo, Brazil.
Download

Paper Nr: 142
Title:

Global Reward Design for Cooperative Agents to Achieve Flexible Production Control under Real-time Constraints

Authors:

Sebastian Pol, Schirin Baer, Danielle Turner, Vladimir Samsonov and Tobias Meisen

Abstract: In flexible manufacturing, efficient production requires reactive control. We present a solution for solving practical and flexible job shop scheduling problems, focusing on minimizing total makespan while dealing with many product variants and unseen production scenarios. In our system, each product is controlled by an independent reinforcement learning agent for resource allocation and transportation. A significant challenge in multi-agent solutions is collaboration between agents for a common optimization objective. We implement and compare two global reward designs enabling cooperation between the agents during production: dense local rewards augmented with global reward factors, and a sparse global reward design. The agents are trained on randomized product combinations. We validate the results using unseen scheduling scenarios to evaluate generalization. Our goal is not to outperform existing domain-specific heuristics for total makespan, but to generate comparably good schedules with the advantage of being able to react instantaneously to unforeseen events. Both implemented reward designs show very promising results; the dense reward design performs slightly better, while the sparse reward design is much more intuitive to implement. We benchmark our results against simulated annealing based on total makespan and computation time, showing that we achieve comparable results with reactive behavior.
Download

Paper Nr: 150
Title:

Edge Deep Learning Applied to Granulometric Analysis on Quasi-particles from the Hybrid Pelletized Sinter (HPS) Process

Authors:

Natália C. Meira, Mateus C. Silva, Ricardo R. Oliveira, Aline Souza, Thiago D’Angelo and Cláudio B. Vieira

Abstract: The mining and metallurgical industry seeks to adapt to Industry 4.0 by implementing Artificial Intelligence in its processes. The purpose of this paper is to develop the first steps of a deep learning solution with edge computing to recognize the quasi-particles from the Hybrid Pelletized Sinter (HPS) process and provide their particle size distribution. We trained our model with the aXeleRate tool using the Keras-TensorFlow framework and the MobileNet architecture, and tested it on an embedded system using the SiPEED MaiX Dock board. Our model obtained 98.60% accuracy in training validation using real and synthetic images, and 100% accuracy with 70% recall in tests with synthetic images. The test results indicate the feasibility of the proposed system, but with probable overfitting in the training stage.
Download

Paper Nr: 202
Title:

Performance Analysis of Different Operators in Genetic Algorithm for Solving Continuous and Discrete Optimization Problems

Authors:

Shilun Song, Hu Jin and Qiang Yang

Abstract: The genetic algorithm (GA), as a powerful meta-heuristic, has broad applicability to different optimization problems. Although there is much research on GAs, few works synthetically summarize the impact of different genetic operators and parameter settings on the GA. To fill this gap, this paper conducts extensive experiments on the GA to investigate the influence of different operators and parameter settings in solving both continuous and discrete optimization problems. Experiments on 16 nonlinear optimization (NLO) problems and 9 traveling salesman problems (TSP) show that tournament selection, uniform crossover, and a novel combination-based mutation are the best choices for continuous problems, while roulette wheel selection, distance-preserving crossover, and swapping mutation are the best choices for discrete problems. We expect this work to provide valuable suggestions for users and new learners.
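For readers unfamiliar with the operators compared in the paper, tournament selection and uniform crossover, the winning combination for continuous problems, can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' experimental code; the population size, tournament size k, and the sphere fitness function are arbitrary choices.

```python
import random

def tournament_select(population, fitness, k=3, rng=random):
    """Draw k individuals at random and return the fittest (maximization)."""
    contenders = rng.sample(population, k)
    return max(contenders, key=fitness)

def uniform_crossover(parent_a, parent_b, rng=random):
    """Each gene is inherited from either parent with probability 0.5."""
    return [a if rng.random() < 0.5 else b for a, b in zip(parent_a, parent_b)]

random.seed(0)
# Toy population of 20 real-valued individuals with 4 genes each.
pop = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(20)]
fitness = lambda ind: -sum(x * x for x in ind)   # sphere function, maximized at 0
parent1 = tournament_select(pop, fitness)
parent2 = tournament_select(pop, fitness)
child = uniform_crossover(parent1, parent2)
print(len(child))  # 4
```

Tournament selection applies constant selection pressure regardless of the fitness scale, which is one reason it tends to suit continuous landscapes better than roulette wheel selection.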
Download

Paper Nr: 225
Title:

Predicting Type Annotations for Python using Embeddings from Graph Neural Networks

Authors:

Vladimir Ivanov, Vitaly Romanov and Giancarlo Succi

Abstract: An intelligent tool for type annotations in Python would increase the productivity of developers. Python is a dynamic programming language, and predicting types using static analysis is difficult. Existing techniques for type prediction use deep learning models that originated in the area of Natural Language Processing. These models depend on the quality of the embeddings for source code tokens. We compared approaches for pre-training embeddings for source code; specifically, we compared FastText embeddings to embeddings trained with Graph Neural Networks (GNN). Our experiments showed that GNN embeddings outperformed FastText embeddings on the task of type prediction. Moreover, the two seem to encode complementary information, since the prediction quality increases when both types of embeddings are used.
Download

Short Papers
Paper Nr: 16
Title:

Comparing Feature Engineering and Deep Learning Methods for Automated Essay Scoring of Brazilian National High School Examination

Authors:

Aluizio H. Filho, Fernando Concatto, Hércules Antonio do Prado and Edilson Ferneda

Abstract: The National High School Exam (ENEM) in Brazil is a test applied annually to assess students before they enter higher education. On average, over 7.5 million students participate in this test. Likewise, large educational groups need to conduct tests for students preparing for the ENEM. Correcting each essay requires at least two evaluators, which makes the process time-consuming and very expensive. One alternative for substantially reducing the cost and speeding up the correction of essays is to replace one human evaluator with an automated process. This paper presents a computational approach to essay correction able to replace one human evaluator. Techniques based on feature engineering and deep learning were compared, aiming to obtain the best accuracy among them. We found that it is possible to reach accuracy indexes close to 100% for the most frequent classes, which comprise nearly 80% of the essay set.
Download

Paper Nr: 34
Title:

CHILDATTEND: A Neural Network based Approach to Assess Child Attendance in Social Project Activities

Authors:

João A. Estrela and Wladmir C. Brandão

Abstract: Social project sponsors demand transparency in the application of donated resources. A challenge for nongovernmental organizations that support children is to provide proof of children’s participation in social project activities for sponsors. Additionally, the proof of participation by roll call or paper reports is much less convincing than automatic attendance checking by image analysis. Despite recent advances in face recognition, there is still room for improvement when algorithms are fed with only one instance of a person’s face, since that person can significantly change over the years, especially children. Furthermore, face recognition algorithms still struggle in special cases, e.g., when there are many people in different poses and the photos are taken under variant lighting conditions. In this article we propose a neural network based approach that exploits face detection, face recognition and image alignment algorithms to identify children in activity group photos, i.e., images with many people performing activities, often on the move. Experiments show that the proposed approach is fast and identifies children in activity group photos with more than 90% accuracy.
Download

Paper Nr: 35
Title:

Evaluating a Session-based Recommender System using Prod2vec in a Commercial Application

Authors:

Hasan Tercan, Christian Bitter, Todd Bodnar, Philipp Meisen and Tobias Meisen

Abstract: Recommender systems are a central component of many online stores and product websites. One of their essential functions is to show users new products that they do not yet know they want to buy. Since the users of the website are often unknown to the system, a product recommendation must be made using the current activities within a browser session. In this paper we address this issue in a deep learning-based product-to-product recommendation problem for a commercial website with millions of user interactions. Our proposed approach is based on a prod2vec method for product embeddings, thus recommending those products that often occur together with the target product. Following the idea of word2vec methods from the NLP domain, we train an artificial neural network on user activity data extracted from historical browser sessions. In several real A/B tests on the website, we show that our approach delivers successful product recommendations and outperforms the system currently in use. In addition, the results show that performance can be significantly improved by an appropriate selection of the training data and the time range of historical user interactions.
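The core signal prod2vec exploits, namely products that co-occur within the same browser sessions, can be illustrated without a neural network. The sketch below is a deliberately simplified stand-in that counts session co-occurrences instead of training word2vec-style embeddings; the session contents and function names are hypothetical, not from the paper.

```python
from collections import Counter, defaultdict
from itertools import combinations

def cooccurrence_counts(sessions):
    """Count how often each pair of products appears in the same session."""
    co = defaultdict(Counter)
    for session in sessions:
        for a, b in combinations(set(session), 2):
            co[a][b] += 1
            co[b][a] += 1
    return co

def recommend(co, product, n=2):
    """Return the n products most frequently seen alongside `product`."""
    return [p for p, _ in co[product].most_common(n)]

# Hypothetical browser sessions, each a list of viewed products.
sessions = [['shoes', 'socks', 'laces'],
            ['shoes', 'socks'],
            ['shirt', 'tie'],
            ['shoes', 'laces']]
co = cooccurrence_counts(sessions)
print(sorted(recommend(co, 'shoes')))  # ['laces', 'socks']
```

Prod2vec improves on raw counts by learning dense vectors in which products with similar session contexts end up close together, so it can also relate products that never co-occurred directly.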
Download

Paper Nr: 36
Title:

RECAID: A Sponsorship Recommendation Approach

Authors:

William J. Bernardes de Oliveira and Wladmir C. Brandão

Abstract: Non-government organizations play an important role in society, providing access to basic services in culture, education, health, and security for needy people. Some of these organizations raise funds for their social projects through sponsorship programs for people in poverty, deprivation, exclusion, and vulnerability. The intensive use of technology for matching sponsors and beneficiaries is paramount to creating more lasting bonds, maximizing the likelihood of stronger relationships and consequently raising more resources for projects. In this article we propose and evaluate a learning approach to recommend beneficiaries to sponsors. In particular, we exploit different recommendation strategies, such as collaborative filtering with matrix factorization, content-based filtering with bag-of-words and word embeddings, and knowledge-based recommendation with association rules. Experimental results show that content-based strategies based on word embeddings are the most effective, reaching up to 72% in MAP and nDCG. Additionally, the approach can effectively recommend beneficiaries to sponsors even when there is little feedback information on beneficiaries and sponsors to train the recommendation models.
Download

Paper Nr: 64
Title:

Opportunities and Challenges in Fall Risk Management using EHRs and Artificial Intelligence: A Systematic Review

Authors:

Henrique D. Santos, Juliana O. Damasio, Ana S. Ulbrich and Renata Vieira

Abstract: Electronic Health Records (EHRs) have led to valuable improvements to hospital practices by integrating patient information. In fact, this data can be used to develop clinical risk prediction tools. We performed a systematic literature review with the objective of analyzing current studies that use artificial intelligence techniques in EHRs data to identify in-hospital falls. We searched several digital libraries for articles that reported on the use of EHRs and artificial intelligence techniques to identify in-hospital falls. Articles were selected by three authors of this work. We compiled information on study design, use of EHR data types, and methods. We identified 21 articles, 11 about fall risk prediction and 10 covering fall detection. EHR data shows opportunities and challenges for fall risk prediction and in-hospital fall detection. There is room for improvement in developing such studies.
Download

Paper Nr: 72
Title:

Lead Time Forecasting with Machine Learning Techniques for a Pharmaceutical Supply Chain

Authors:

Maiza Biazon de Oliveira, Giorgio Zucchi, Marco Lippi, Douglas F. Cordeiro, Núbia Rosa da Silva and Manuel Iori

Abstract: Purchasing lead time is the time elapsed between the moment in which an order for a good is sent to a supplier and the moment in which the order is delivered to the company that requested it. Forecasting of purchasing lead time is an essential task in the planning, management and control of industrial processes. It is of particular importance in the context of pharmaceutical supply chain, where avoiding long waiting times is essential to provide efficient healthcare services. The forecasting of lead times is, however, a very difficult task, due to the complexity of the production processes and the significant heterogeneity in the data. In this paper, we use machine learning regression algorithms to forecast purchasing lead times in a pharmaceutical supply chain, using a real-world industrial database. We compare five algorithms, namely k-nearest neighbors, support vector machines, random forests, linear regression and multilayer perceptrons. The support vector machines approach obtained the best performance overall, with an average error lower than two days. The dataset used in our experiments is made publicly available for future research.
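As a concrete illustration of one of the five compared regressors, a k-nearest-neighbors forecast averages the lead times of the most similar historical orders. This is a minimal stdlib sketch under assumed toy order features (quantity, supplier distance), not the authors' models or dataset; in practice libraries such as scikit-learn would be used, with proper feature scaling.

```python
def knn_forecast(history, query, k=3):
    """Predict lead time (in days) as the mean over the k nearest past orders.

    history: list of (features, lead_time_days); features are numeric tuples.
    query: feature tuple for the new order.
    """
    def dist(a, b):
        # Plain Euclidean distance; real features should be normalized first.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(history, key=lambda item: dist(item[0], query))[:k]
    return sum(lead for _, lead in nearest) / k

# Hypothetical past orders: ((quantity, supplier distance), lead time in days).
history = [((10, 1.0), 4.0), ((12, 1.1), 5.0), ((50, 3.0), 12.0),
           ((55, 3.2), 14.0), ((11, 0.9), 4.5)]
print(knn_forecast(history, (11, 1.0)))  # 4.5 (mean of the three small orders)
```

The query order resembles the three small, nearby orders, so the forecast ignores the two large orders with long lead times.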
Download

Paper Nr: 78
Title:

Early-identification of Human Resource Trends and Innovations through Web-scraping Technology

Authors:

Alexander Smirnov, Nikolay Shilov, Alexey Kashevnik, Mikhail Petrov, Simon Brugger and Tefik Ismaili

Abstract: The paper presents an innovation management approach for the human resources (HR) management area. The approach makes it possible to search for and scrape HR trends and to have those trends jointly evaluated by a global expert community. To support the innovation management approach and to identify innovations applicable to the HR domain, we developed a platform that scrapes human resources websites and parses the documents based on pre-defined keywords. It is based on the analysis of changes in term occurrence frequency over a period. The innovations are evaluated by HR Process Owners and the global HR Expert Community. For project staffing, innovation-specific requirements are matched with employee skill profiles.
Download

Paper Nr: 100
Title:

Application of Machine Learning Methods to Improve of the Roller Press Performance in the Pelletizing Process

Authors:

Thiago Nicoli de Abreu, Andrea C. Bianchi and Saul D. Silva

Abstract: In recent years, roller press technology has become very useful in pelletizing processes to comminute the pellet feed and increase the specific surface of the iron ore. It is known that the surface gain is directly related to productivity and quality gains in the pelletizing process. Given its importance, increasing the efficiency of the press becomes ever more necessary, mainly because of its direct impact on the production chain. The large number of variables involved in its operation demonstrates that conventional methods and the current knowledge of this process can be improved. To this end, this work identifies the variables with the highest influence on the specific surface gain, develops a classification model to determine rules for optimal operation settings, and presents a model for the prediction of the specific surface variable, seeking performance gains for this asset.
Download

Paper Nr: 101
Title:

IDiSSC: Edge-computing-based Intelligent Diagnosis Support System for Citrus Inspection

Authors:

Mateus C. Silva, Jonathan C. Ferreira da Silva and Ricardo R. Oliveira

Abstract: Orange and citrus agriculture has a significant economic role, especially in tropical countries. The use of edge systems with machine learning techniques presents a perspective to improve the present techniques, with faster tools aiding the inspection diagnostics. The usage of cost- and resource-restrictive devices to create these solutions improves this technique’s reach capability and reproducibility. In this perspective, we propose a novel edge-computing-based intelligent diagnosis support system performing a pseudospectral analysis to improve the orange inspection processes. Our results indicate that traditional machine learning methods reach over 92% accuracy, reaching 99% on the best performance technique with Artificial Neural Networks in the binary classification stage. For multiple classes, the accuracy varies from 97% up to 98%, also reaching the best performance with Artificial Neural Networks. Finally, the Random Forest and Artificial Neural Network obtained the best results, considering algorithm parameters and embedded hardware performance. These results enforce the feasibility of the proposed application.
Download

Paper Nr: 128
Title:

A Robust Real-time Component for Personal Protective Equipment Detection in an Industrial Setting

Authors:

Pedro Torres, André Davys, Thuener Silva, Luiz Schirmer, André Kuramoto, Bruno Itagyba, Cristiane Salgado, Sidney Comandulli, Patricia Ventura, Leonardo Fialho, Marinho Fischer, Marcos Kalinowski, Simone Barbosa and Hélio Lopes

Abstract: In large industries, such as construction, metallurgy, and oil, workers are continually exposed to various hazards in their workplace. According to the International Labour Organization (ILO), there are 340 million occupational accidents annually. Personal Protective Equipment (PPE) is used to ensure the essential protection of workers' health and safety, and there is a great effort to ensure that this equipment is used properly. In such environments, it is common to have closed-circuit television (CCTV) cameras to monitor workers, and these can be used to verify proper PPE usage. Some works address this problem using CCTV images; however, they frequently cannot handle the detection of multiple pieces of safety equipment, and others even skip the verification phase, performing only detection. In this paper, we propose a novel cognitive safety analysis component for a monitoring system. This component detects the proper usage of PPE in real time using the data stream from regular CCTV cameras. We built the component on top of state-of-the-art deep learning techniques for object detection. The methodology is robust, with consistent and promising results in Mean Average Precision (80.19% mAP), and can act in real time (80 FPS).
Download

Paper Nr: 146
Title:

Towards the Automation of Industrial Data Science: A Meta-learning based Approach

Authors:

Moncef Garouani, Adeel Ahmad, Mourad Bouneffa, Arnaud Lewandowski, Gregory Bourguin and Mohamed Hamlich

Abstract: In the context of the fourth industrial revolution (Industry 4.0), industrial big data is growing rapidly in response to agile industrial computing and manufacturing technologies. This data evolution can be captured using ubiquitous integrated sensors and multiple smart machines. We believe the use of data science methodologies for the selection of models and the configuration of hyper-parameters may help to better control such data evolution. At the same time, industrial practitioners and researchers often lack the machine-learning expertise needed to extract the benefit from valuable manufacturing big data. This lack poses a major obstacle to deriving value even from familiar data. In this case, collaboration with data scientists may become a requirement, along with extensive machine learning knowledge, which may entail further delays and effort. Multiple approaches for automating machine learning (AutoML) have been proposed in recent years to alleviate this deficiency. These approaches are expected to perform well, but they require computing resources that are mostly not readily accessible. To address this research challenge, in this paper we propose a meta-learning based approach that may serve as an effective decision support system for the AutoML process.
Download

Paper Nr: 165
Title:

Online Non-metric Facility Location with Service Installation Costs

Authors:

Christine Markarian

Abstract: In this paper, we study the non-metric Online Facility Location with Service Installation Costs problem (OFL-SIC), an extension of the well-known non-metric Online Facility Location problem. In OFL-SIC, we are given a set of facilities, a set of services, and a set of requests arriving over time. Each request is composed of a subset of the services. Facilities are enabled to offer a subset of the services when being opened and an algorithm has to ensure that each arriving request is connected to a set of open facilities jointly offering the requested services. Opening a facility incurs an opening cost and for each offered service, there is a service installation cost that needs to be paid if the algorithm decides to install the service at the facility. Connecting a request to an open facility incurs a connecting cost, which is equal to the distance between the request and the facility. The goal is to minimize the total opening, service installation, and connecting costs. We propose the first online algorithm for non-metric OFL-SIC and show that it is asymptotically optimal under the standard notion of competitive analysis which is used to evaluate the performance of online algorithms.
Download

Paper Nr: 169
Title:

Algorithmic View of Online Prize-collecting Optimization Problems

Authors:

Christine Markarian and Abdul N. El-Kassar

Abstract: Online algorithms have been a cornerstone of research in network design problems. Unlike in classical offline algorithms, the input to an online algorithm is revealed in portions over time, and the online algorithm reacts to each portion while targeting a given optimization goal. Online algorithms are deployed in real-world optimization problems in which provably good decisions are expected in the present without knowing the future. In this paper, we consider a well-established branch of online optimization problems, known as online prize-collecting problems, in which the online algorithm may reject some input portions at the cost of paying an associated penalty. These appear in business applications in which a company decides to lose some customers by paying an associated penalty. In particular, we study the online prize-collecting variants of three well-known optimization problems: Connected Dominating Set, Vertex Cover, and Non-metric Facility Location, namely, Online Prize-collecting Connected Dominating Set (OPC-CDS), Online Prize-collecting Vertex Cover (OPC-VC), and Online Prize-collecting Non-metric Facility Location (OPC-NFL), respectively. We propose the first online algorithms for these variants and evaluate them using competitive analysis, the standard framework for measuring online algorithms, in which the online algorithm is measured against an optimal offline algorithm that knows the entire input sequence in advance.
Download

Paper Nr: 194
Title:

Application Development for Mask Detection and Social Distancing Violation Detection using Convolutional Neural Networks

Authors:

Gokul S. Kumar and Sujala D. Shetty

Abstract: This project aims to detect face masks and social distancing in a video feed using machine learning and object detection. TensorFlow and Keras were used to build a CNN model to detect face masks, and it was trained on a dataset of 3,800 images. YOLO object detection was used to detect people in a frame and to check for social distancing by calculating the Euclidean distance between the centroids of the detected boxes. We developed an Android app named “StaySafe” through which the user is notified of violations and can monitor them, with Firebase as the backend service. If a violation is detected, the image is uploaded to Firebase Cloud Storage with a notification, and the user can view these images in the Android app along with the date and time. The Firebase Cloud Messaging service is used to send the notifications, which are handled in the Android app. The app offers various features such as viewing the history, saving images to the device, and deleting images from the cloud. A heat map can also be viewed that highlights crowded regions, which can help officials identify the regions that need to be sanitized more often.
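The social-distancing check described above, the Euclidean distance between the centroids of detected person boxes, can be sketched directly. The boxes and the pixel threshold below are illustrative values; in the paper the boxes come from the YOLO detector.

```python
import math

def centroid(box):
    """Center point of an (x1, y1, x2, y2) detection box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def violations(boxes, min_dist):
    """Index pairs of people whose centroids are closer than min_dist pixels."""
    pairs = []
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if math.dist(centroid(boxes[i]), centroid(boxes[j])) < min_dist:
                pairs.append((i, j))
    return pairs

# Three hypothetical person boxes: two close together, one far away.
people = [(0, 0, 10, 20), (12, 0, 22, 20), (200, 0, 210, 20)]
print(violations(people, min_dist=50))  # [(0, 1)]
```

A fixed pixel threshold is only a proxy for physical distance; a deployed system would calibrate it per camera, for example via a perspective transform of the ground plane.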
Download

Paper Nr: 199
Title:

Configurable Process Mining: Semantic Variability in Event Logs

Authors:

Aicha Khannat, Hanae Sbai and Laila Kjiri

Abstract: A configurable process model is a reference model regrouping multiple business process variants. Configurable process models offer various benefits, such as reusability and more flexibility, when compared to plain business process models. The challenges encountered while managing this type of model are related to its creation and configuration. Recently, process mining has offered techniques to discover configurable process models, check their conformance, and enhance them using a collection of event logs that captures traces recorded during the execution of process variants. However, existing works on configurable process discovery lack the incorporation of semantics in the resulting model. Historically, semantic process mining has been applied to event logs to improve process discovery with respect to semantics. Furthermore, to the best of our knowledge, configurable process mining approaches do not fully support semantics. In this paper, we propose a novel method to enrich the collection of event logs with configurable process ontology concepts by introducing semantic annotations that capture the variability of the elements present in the logs. This is a first step towards discovering a semantically enriched configurable process.
Download

Paper Nr: 200
Title:

Fever Status Detection using Artificial Neuron Network

Authors:

Linos Nchena and Dagmar Janacova

Abstract: This research paper proposes a monitoring system and a prototype developed for detecting fever status in elderly people and other populations requiring continuous specialist care. With various issues affecting the health of senior citizens, there is an imperative requirement to continuously monitor their health status. The monitoring system is beneficial because it enables the real-time detection of fever and thus allows early treatment. Delaying treatment can let the underlying health issue progress beyond a remediable condition, so quick detection is vital. Various issues can cause illness in people, including virus outbreaks, seasonal infections, disease, and old age. In this paper our focus is mainly on old age, as this group is much more at risk of becoming ill and frequently needs more attention. In this project, the presence of fever or illness is detected using artificial intelligence (AI), specifically artificial neural networks. The computation is done by first training the system and then validating the trained system. After training, the system is supplied with a new set of data with known states to verify that the training was successful: if the system is well trained, it labels the validation data correctly. These labels are known in advance, but they are not provided to the system during validation. The system functions properly if its output labels match the sample data labels. The conducted experiment demonstrated successful detection with an efficiency rate of 82 percent.
Download
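The train-then-validate workflow described in the abstract above can be sketched with a single artificial neuron (a logistic unit). Everything here is an illustrative assumption, not the authors' setup: synthetic temperature readings, a 38 °C fever threshold, and plain gradient descent.

```python
import math
import random

random.seed(0)

# Synthetic readings: body temperature in Celsius; label 1 = fever (>= 38.0).
# Illustrative data only, not the paper's sensor dataset.
def make_samples(n):
    data = []
    for _ in range(n):
        t = random.uniform(35.5, 40.5)
        data.append((t, 1 if t >= 38.0 else 0))
    return data

train, valid = make_samples(200), make_samples(50)

# Single artificial neuron (logistic unit) trained by gradient descent.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    for t, y in train:
        x = t - 38.0  # centre the input for stable training
        p = 1 / (1 + math.exp(-(w * x + b)))
        w += lr * (y - p) * x
        b += lr * (y - p)

# Validation: the known labels are withheld from the model and used only
# to score its predictions, as in the abstract's train/validate split.
correct = sum(1 for t, y in valid
              if (1 / (1 + math.exp(-(w * (t - 38.0) + b))) >= 0.5) == bool(y))
accuracy = correct / len(valid)
print(f"validation accuracy: {accuracy:.2f}")
```

On this linearly separable toy data the neuron recovers the threshold almost exactly; the paper's 82% figure reflects far noisier real-world data.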

Paper Nr: 212
Title:

How to Identify the Infeasible Test Requirements using Static Analyse? An Exploratory Study

Authors:

João C. Neto, Allan Mori, Ricardo F. Vilela, Thelma E. Colanzi and Simone R. S. de Souza

Abstract: Context: Software testing is an essential activity to ensure the quality of software. However, the selection and generation of test cases can be an expensive and hard task. A large number of infeasible test requirements (e.g. infeasible paths) increases the effort of test data generation, and identifying them is not trivial. Objective: To investigate and analyze, through an exploratory study, a process for identifying properties of infeasible test requirements statically, without input data. Methodology: We gathered a set of statistical properties to identify infeasible test requirements without the use of input data. We manually verified the identification process using a benchmark of 19 Java programs. Results and conclusions: The alternative process identified infeasible requirements without using input data and proved effective. This study highlights the tester's role in the process of identifying infeasible elements and also the need to automate this process, given the level of complexity involved in decision making.
Download

Paper Nr: 226
Title:

Strategies for Electric Location-routing Problems Considering Short and Long Term Horizons

Authors:

Victor V. Corrêa, André D. Santos and Thiago H. Nogueira

Abstract: Recent climate data has drawn attention to many problems related to the global warming effect caused by the emission of greenhouse gases. The rise in the global average temperature has many consequences and is very close to the established threshold beyond which, unless immediate action is taken, the damage to our planet will be irreversible. The transportation sector, responsible for 23% of global CO2 emissions, and the public authorities, aware of the situation, have been trying to innovate, and solutions such as electric vehicles are getting much attention and growing in popularity. This work aims to help logistics companies by proposing a metaheuristic algorithm and a novel methodology for planning electric vehicle infrastructures composed of battery recharging stations and battery swap stations. Differently from previous works, we consider a long-term planning horizon by using the proposed algorithm itself to pre-process data, improving results by exploiting the synergy between the long-term location and short-term routing problems. Computational experiments show that our algorithm is able to reduce the cost of electric vehicle infrastructures compared to previous work.
Download

Paper Nr: 250
Title:

Streetwise: Mapping Citizens’ Perceived Spatial Qualities

Authors:

Moreno Colombo, Jhonny Pincay, Oleg Lavrovsky, Laura Iseli, Joris Van Wezemael and Edy Portmann

Abstract: Streetwise is the first map of the spatial quality of urban design in Switzerland. Streetwise measures the human perception of spatial situations using crowdsourcing methods: a large number of people are shown pairs of street-level images of public space online and, by clicking on an image, each indicates which place they consider to have the better atmosphere, which is the focus of this article. With the gathered data, a machine learning model was trained, allowing it to learn the features that motivate people to choose one image over another. The trained model was then used to estimate a score representing the perceived atmosphere in a large number of images from different urban areas within the Zurich metropolitan region, which could then be visualized on a map to offer a comprehensive overview of the atmosphere of the analyzed cities. The accuracy obtained in the evaluation of the machine learning model indicates that the method can perform as well as a group of humans.
Download
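The pairwise-click setup above lends itself to rating schemes. The authors train a machine learning model on image features; the Elo-style update below is only a stand-in showing how pairwise choices can be turned into a per-place atmosphere score (the place names and judgements are invented):

```python
# Hypothetical pairwise judgements: (winner, loser) street-scene IDs,
# standing in for crowdsourced clicks on image pairs.
judgements = [("park", "highway"), ("park", "alley"), ("plaza", "alley"),
              ("park", "plaza"), ("plaza", "highway"), ("park", "highway")]

ratings = {}
K = 32  # Elo update step


def expected(ra, rb):
    """Probability that the first place wins, given current ratings."""
    return 1 / (1 + 10 ** ((rb - ra) / 400))


for winner, loser in judgements:
    ra = ratings.setdefault(winner, 1500.0)
    rb = ratings.setdefault(loser, 1500.0)
    e = expected(ra, rb)
    ratings[winner] = ra + K * (1 - e)
    ratings[loser] = rb - K * (1 - e)

# Places ranked from best to worst perceived atmosphere.
ranked = sorted(ratings, key=ratings.get, reverse=True)
print(ranked)
```

Unlike this lookup-table scheme, the paper's model scores previously unseen images, which is what makes city-wide mapping possible.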

Paper Nr: 7
Title:

NEWRITER: A Text Editor for Boosting Scientific Paper Writing

Authors:

João R. Bezerra, Luís W. Góes and Wladmir C. Brandão

Abstract: Nowadays, in the scientific field, text production is required from scientists as a means of sharing their research and contribution to science. Scientific text writing is a task that demands time and formal writing skills, and it can be especially slow and challenging for inexperienced researchers. In addition, scientific texts must be written in English and follow a specific style and terminology, which can be difficult especially for researchers who are not native English speakers or who do not know the specific writing procedures required by a publisher. In this article, we propose NEWRITER, a neural network-based approach to scientific text writing assistance. In particular, it enables users to feed related scientific text as input to train a scientific-text base language model into a user-customized, specialized one. The user is then presented with real-time text suggestions as they write their own text. Experimental results show that our user-customized language model can be effectively used for scientific text writing assistance when compared to state-of-the-art pre-trained models.
Download

Paper Nr: 13
Title:

Variational Autoencoder for Anomaly Detection in Event Data in Online Process Mining

Authors:

Philippe Krajsic and Bogdan Franczyk

Abstract: The analysis of event data recorded by information systems is becoming increasingly relevant. An increasingly data-centric analysis of processes using process mining techniques has a direct impact on the management of business processes. To achieve a positive impact on business process management, a high-quality data basis is important. This paper presents an approach that applies a variational autoencoder to filter anomalous event data in an online process mining environment, which helps to improve the results of process mining techniques and thus positively influences business process management. For anomaly detection in an unsupervised environment, mass-volume and excess-mass scores are used as metrics. The results are compared against established algorithms such as the one-class support vector machine, isolation forest, and local outlier factor. These insights are used to highlight the benefits of this approach for process mining and business process management.
Download
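The core idea, that anomalous events reconstruct poorly, can be shown without a full variational autoencoder. The sketch below substitutes a linear autoencoder (a one-component PCA projection) for the VAE and uses made-up three-dimensional event features; it illustrates reconstruction-error scoring, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Normal event-feature vectors lie near a 1-D subspace; anomalies do not.
# Illustrative data only -- the paper works on process-mining event logs.
normal = rng.normal(0, 0.1, (200, 3)) + rng.normal(0, 1, (200, 1)) * np.array([1.0, 1.0, 1.0])
anomaly = rng.normal(0, 0.1, (5, 3)) + np.array([3.0, -3.0, 0.0])

# Linear "autoencoder": project onto the top principal component and back.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = vt[:1]  # 1-D latent space


def recon_error(x):
    z = (x - mean) @ basis.T  # encode
    xhat = z @ basis + mean   # decode
    return np.linalg.norm(x - xhat, axis=1)


# Flag events whose reconstruction error exceeds the bulk of normal traffic.
threshold = np.percentile(recon_error(normal), 95)
flags = recon_error(anomaly) > threshold
print(flags)
```

A VAE replaces the linear projection with learned non-linear encoder/decoder networks, but the filtering decision works on the same principle.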

Paper Nr: 30
Title:

Visual Analytics for Industrial Sensor Data Analysis

Authors:

Tristan Langer and Tobias Meisen

Abstract: Due to the increasing digitalization of production processes, more and more sensor data is recorded for subsequent analysis in various use cases (e.g. predictive maintenance). The analysis and utilization of this data by process experts reveals optimization potential throughout the production process. However, new analysis methods are usually first published as non-standardized Python or R libraries and are therefore not available to process experts with limited programming and data management knowledge. It often takes years before those methods are available in ERP, MES and other production environments, and the optimization potential remains idle until then. In this paper, we present a visual analytics approach to facilitate the inclusion of process experts in the analysis and utilization of industrial sensor data. Based on two real-world exemplary use cases, we define a catalog of requirements and develop a tool that provides dedicated interactive visualizations alongside methods for exploration, clustering, labeling, and classification of sensor data. We then evaluate the usefulness of the presented tool in a qualitative user study. The feedback given by the participants indicates that such an approach eases access to data analysis methods but needs to be integrated into a comprehensive data management and analysis process.
Download

Paper Nr: 32
Title:

Ontology-based Approach for Business Opportunities Recognition

Authors:

Vinicius F. Salgado, Diego L. Santos, Frederico C. Dutra, Fernando S. Parreiras and Wladmir C. Brandão

Abstract: The Web is the main source of business-related information due to its accessibility, diversity, and huge size, resulting from the high degree of collective engagement. However, extracting relevant information from this vast environment for use in decision making by organizational staff is a great challenge. In particular, gathering information related to business opportunities and treating it effectively to extract useful information for predicting consumer and market behavior is essential for organizational survival. Although some approaches for handling business-related information from the Web have been proposed in the literature, they underexploit contextual semantic patterns for information extraction, e.g., the set of properties related to the business opportunity topic. The present article proposes an ontology-based approach to recognize business opportunities in business-related news extracted from the Web. Experimental results show that our approach can effectively recognize business opportunities, reaching up to 90% accuracy.
Download

Paper Nr: 81
Title:

Exploring the Relationships between Data Complexity and Classification Diversity in Ensembles

Authors:

Nathan F. Garcia, Frederico Tiggeman, Eduardo N. Borges, Giancarlo Lucca, Helida Santos and Graçaliz Dimuro

Abstract: Several classification techniques have been proposed in recent years. Each approach is best suited to a particular classification problem, i.e., a classification algorithm may not effectively or efficiently recognize some patterns in complex data, and selecting the best-tuned solution may be prohibitive. Methods for combining classifiers have also been proposed, aiming at improving the generalization ability and classification results. In this paper, we analyze geometrical features of the data class distribution and the diversity of the base classifiers to better understand the performance of an ensemble approach based on stacking. The experimental evaluation was conducted using 32 real datasets, twelve data complexity measures, five diversity measures, and five heterogeneous classification algorithms. The results show that stacked generalization outperforms the best individual base classifier when there is a combination of complex and imbalanced data with diverse predictions among weak learners.
Download
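A minimal stacked-generalization setup in the spirit of the abstract might look as follows, using scikit-learn's `StackingClassifier` with heterogeneous base learners on synthetic imbalanced data. The dataset, base learners, and parameters are illustrative assumptions; the paper evaluates 32 real datasets and five algorithms.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic imbalanced dataset standing in for the paper's real datasets.
X, y = make_classification(n_samples=500, n_features=10, n_informative=5,
                           weights=[0.8, 0.2], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Heterogeneous (hence diverse) base classifiers, combined by a logistic
# meta-learner trained on their cross-validated predictions.
stack = StackingClassifier(
    estimators=[("knn", KNeighborsClassifier()),
                ("rf", RandomForestClassifier(n_estimators=50, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(Xtr, ytr)
acc = accuracy_score(yte, stack.predict(Xte))
print(f"stacked accuracy: {acc:.3f}")
```

The paper's finding is precisely about when this combination beats the best base learner: diverse predictions on complex, imbalanced data.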

Paper Nr: 84
Title:

Machine Learning Algorithms for Breast Cancer Detection in Mammography Images: A Comparative Study

Authors:

Rhaylander M. Almeida, Dehua Chen, Agnaldo S. Filho and Wladmir C. Brandão

Abstract: Breast tumors are the most common type of cancer in women worldwide, representing approximately 12% of reported new cases and 6.5% of cancer deaths in 2018. Mammography screening is extremely important for early detection of breast cancer. The assessment of mammograms is a complex task with significant variability due to professional experience and human error, presenting an opportunity for assisting tools to improve both reliability and accuracy. The usage of deep learning in medical image analysis has increased, assisting specialists in the early detection, diagnosis, treatment, and prognosis of diseases. In this article, we compare the performance of XGBoost and VGG16 on the task of breast cancer detection using digital mammograms from the CBIS-DDSM dataset. In addition, we compare prediction accuracy between full mammogram images and patches extracted from the original images based on ROIs annotated by experts. Moreover, we also perform experiments with transfer learning and data augmentation to exploit data diversity and the ability to extract features and learn from raw unprocessed data. Experimental results show that XGBoost achieves 68.29% in AUC, while VGG16 achieves approximately the same performance, at 68.24% in AUC.
Download
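The AUC-based model comparison can be sketched as below. `GradientBoostingClassifier` stands in for XGBoost and the data is synthetic rather than CBIS-DDSM mammograms, so the numbers will not match the paper's:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for mammogram feature vectors (not CBIS-DDSM data).
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

results = {}
for name, clf in [("boosted trees", GradientBoostingClassifier(random_state=0)),
                  ("logistic baseline", LogisticRegression(max_iter=1000))]:
    clf.fit(Xtr, ytr)
    scores = clf.predict_proba(Xte)[:, 1]  # probability of the positive class
    # AUC uses the continuous scores, not hard labels, so it is insensitive
    # to the choice of decision threshold.
    results[name] = roc_auc_score(yte, scores)

for name, auc in results.items():
    print(f"{name}: AUC = {auc:.3f}")
```

AUC is the natural metric here because it summarizes the ranking quality of a detector across all operating points, which matters for screening workloads.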

Paper Nr: 87
Title:

Systematic Selection and Prioritization of Communication Channels in the Healthcare Sector

Authors:

Francisco Casaca and André Vasconcelos

Abstract: Many industries are using multi-channel approaches to bring users and organizations closer. The services provided through these channels can be leveraged by using chatbots, allowing users to have simpler and more natural interactions. However, this entails designing architectures that make services available through multiple channels while keeping the users' experiences coherent as they switch between them. Media Richness Theory introduces a way to classify the richness of communication channels based on objective factors; however, it does not provide a systematic channel selection and prioritization process. To address these needs, this work proposes a systematic approach to select and prioritize communication channels based on six factors: feedback, multiple cues, personal focus, language variety, accessibility, and cost. To validate this approach, the systematic process is applied to three use cases in the healthcare domain.
Download

Paper Nr: 129
Title:

Social, Legal, and Technical Considerations for Machine Learning and Artificial Intelligence Systems in Government

Authors:

Richard Dreyling, Eric Jackson, Tanel Tammet, Alena Labanava and Ingrid Pappel

Abstract: The expansion of technology has led governments increasingly to reckon with advanced technologies like machine learning and artificial intelligence. Research has covered the ethical considerations of AI as well as legal and technical aspects of the operation of these systems within the framework of government. This research is an introduction to the topic in the Estonian context, using a multidisciplinary inquiry based on the theoretical framework of technology adoption and on getting citizens to use these services for their benefit.
Download

Paper Nr: 153
Title:

SPOT: Toward a Decision Guidance System for Unified Product and Service Network Design

Authors:

Joost Bottenbley and Alexander Brodsky

Abstract: A major deficiency in the manufacturing ecosystem today is the lack of a cloud-based infrastructure that supports combined decision making and optimization across product design, process design, and the supply chain, as opposed to today's hard-wired solutions within silos. The reported work takes a step toward bridging this gap by developing a software framework, a prototype, and a case study for SPOT, a decision guidance system for the simultaneous optimization and trade-off analysis of combined service and product networks, capable of expressing combined product, process, and supply chain design. SPOT allows users to express, as data input, a hierarchical assembly and composition of virtual products and services, i.e., ones with fixed and control parameters that can be optimized. Virtual services produce a flow of virtual products, such as raw materials, parts, or finished products. Like the virtual services, they are associated with analytic models that express customer-facing performance metrics and feasibility constraints, which are used for optimization. The uniqueness of our approach in SPOT is the use of a modular simulation-like model for product and service networks that nevertheless achieves the optimization quality and computational time of the best available mathematical programming solvers, accomplished by symbolic computation over the simulation code to generate lower-level mathematical programming models.
Download

Paper Nr: 162
Title:

Multi-object Tracking for Urban and Multilane Traffic: Building Blocks for Real-World Application

Authors:

Nikolajs Bumanis, Gatis Vitols, Irina Arhipova and Egons Solmanis

Abstract: Visual object detection and tracking is a fundamental research topic in computer vision, with multiple applications ranging from object classification to multi-object tracking in heavy urban traffic scenarios. While object detection and tracking tasks, especially multi-object tracking, have multiple solutions, it is still unclear how to build real-world applications from the available building blocks, such as algorithms, filters, and base neural networks. The issue becomes more complicated because most recently proposed solutions are based on existing methodologies, frameworks, and applicable technologies; however, some show promising results using contradictory realizations. This paper addresses issues and research trends in multi-object tracking, depicting its building blocks and the current best solutions. As a result, potential building blocks for a real-world application in the framework of Jelgava city in Latvia are presented.
Download

Paper Nr: 188
Title:

Electronic Circuits Extrinsic Evolutionary Platform

Authors:

Pedro G. Coelho, J. F. M. do Amaral and M. C. Bentes

Abstract: This paper presents an electronic circuit evolution platform based on genetic algorithms with different modes of operation. The platform has an extrinsic structure for evaluating individuals, calling a circuit simulator for each candidate solution evaluated. The platform can perform evolutions that search for component values, for additional topologies around a fixed circuit, or with total variation in the types of components, values, and connections. The assessed fitness can be based on a single objective, evaluating only the output of the circuit, or on several objectives. The chosen method for quantifying multiple objectives is based on a fuzzy system, in order to facilitate the designer's specification. Evolutions can be carried out in the time domain as well as in the frequency domain, and the user can change the operating mode without changes to the code already created. Switching between operating modes, inputs, and platform functions is performed directly through configuration variables, without the need to change the platform's source code. To verify the performance of the platform, each mode can be evaluated using different circuits of varying complexity. Selected case studies are shown in the paper to corroborate the feasibility of the method.
Download

Paper Nr: 218
Title:

Hybrid Prototypical Networks Augmented by a Non-linear Classifier

Authors:

Anas El Ouardi, Maryem Rhanoui, Anissa Benlarabi and Bouchra El Asri

Abstract: Text classification is one of the most prolific domains in machine learning. Present in raw form all around us in daily life, from human-to-human communication (mainly through social network apps) to human-machine interaction (especially with chatbots), text is a rich source of information. However, despite the remarkable performance that deep learning achieves in this field, the cost in terms of the amount of data needed to train such models remains considerably high, as does the need to retrain the models to learn each new task. Nevertheless, a new sub-field of machine learning has emerged, named meta-learning, that targets overcoming these limitations; widely used for image-related tasks, it can also bring solutions to tasks associated with text. From this perspective, we propose a hybrid architecture based on the well-known prototypical networks, adapting the model to text classification and augmenting it with a non-linear classifier.
Download

Paper Nr: 233
Title:

Analysing Clustering Algorithms Performance in CRM Systems

Authors:

Indrit Enesi, Ledion Liço, Aleksander Biberaj and Desar Shahu

Abstract: Customer Relationship Management technology plays an important role in business performance. The main problem is the extraction of valuable and accurate information from large sets of customer transaction data. In data mining, clustering techniques group customers based on their transaction details. Grouping is a quantifiable way to analyse customer data and distinguish customers based on their purchases. The number of clusters plays an important role in business intelligence and is an important parameter for business analysts. In this paper, the performance of the K-means and K-medoids algorithms is analysed with respect to the impact of the number of clusters, the number of dimensions, and the distance function. The Elbow method combined with the K-means algorithm is implemented to find the optimal number of clusters for a real data set from retail stores. Results show that the proposed algorithm is very effective when customers need to be grouped based on numerical and nominal attributes.
Download
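The Elbow method the authors combine with K-means can be sketched as follows; the blob data and the second-difference rule for locating the bend are illustrative assumptions, not the paper's retail dataset or exact procedure:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Illustrative "customer" data with three natural groups; real CRM data
# would come from transaction attributes instead.
X, _ = make_blobs(n_samples=300, centers=[[0, 0], [8, 8], [0, 8]],
                  cluster_std=0.8, random_state=0)

# Inertia (within-cluster sum of squares) for a range of candidate k values.
inertias = []
for k in range(1, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    inertias.append(km.inertia_)

# Elbow heuristic: the best k is where the inertia curve bends most sharply,
# i.e. where the second difference of the curve is largest.
second_diff = np.diff(inertias, n=2)
best_k = int(np.argmax(second_diff)) + 2  # +2 offsets the two differences
print("inertia per k:", [round(v) for v in inertias])
print("elbow at k =", best_k)
```

Inertia always decreases as k grows, so the elbow heuristic looks for the point of diminishing returns rather than a minimum.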

Area 3 - Information Systems Analysis and Specification

Full Papers
Paper Nr: 39
Title:

Improve Classification of Commits Maintenance Activities with Quantitative Changes in Source Code

Authors:

Richard R. Mariano, Geanderson D. Santos and Wladmir C. Brandão

Abstract: Software maintenance is an important stage of software development, contributing to the quality of the software. Previous studies have shown that maintenance activities consume more than 40% of the development effort, taking up most of the software budget. Understanding how these activities are performed can help managers plan ahead and allocate resources. Despite previous studies, there is still a lack of accurate models to classify software commits into maintenance activities. In this work, we deepen our previous work, in which we proposed improvements to one of the state-of-the-art techniques for classifying software commits. First, we extend the state-of-the-art technique with three additional features that concern the size of the commit. Second, we propose the use of XGBoost, one of the most advanced implementations of boosted tree algorithms, which tends to outperform other machine learning models. Additionally, we present a deep analysis of our model to understand its decisions. Our findings show that our model outperforms the state-of-the-art technique, achieving more than 77% accuracy and more than 64% on the Kappa metric.
Download

Paper Nr: 116
Title:

A Risk Management Framework for Scrum Projects

Authors:

Samuel S. Lopes, Rogéria C. Gratão de Souza, Allan G. Contessoto, André Luiz de Oliveira and Rosana V. Braga

Abstract: Software changes constantly to suit market volatility, introducing risks to the project. Agile software development approaches, such as Scrum, have been proposed to deal with constant changes in project requirements. In Scrum, the Product Owner (PO) is responsible for managing such changes and ensuring that the developed software brings significant value to the customers. However, there are potential risks involved in these responsibilities, and if not properly managed, they can lead to project failure. In this paper, we introduce a novel approach to managing risks involving the PO's roles. In our work, we tailored the risk management knowledge area from the Project Management Body of Knowledge Guide to Scrum. We established a framework called RIsk Management PRoduct Owner (RIMPRO), which intends to support project teams in systematically managing risks related to PO activities that may arise during the project. As a proof of concept, the processes described in RIMPRO were evaluated by potential users. Through a preliminary assessment, we observed that RIMPRO is promising, since it can assist teams in managing risks involving the PO in a systematized and effective manner.
Download

Paper Nr: 119
Title:

Digital Legacy Management Systems: Theoretical, Systemic and User’s Perspective

Authors:

Eduardo A. Yamauchi, Cristiano Maciel, Fabiana F. Mendes, Gustavo S. Ueda and Vinicius C. Pereira

Abstract: There are now relatively new systems and functionalities aimed at digital legacy management. In this paper, our objective is to analyze the domain of digital legacy management systems from three perspectives: the theoretical, the systemic and the users’. Due to the complexity of those systems, these perspectives were analyzed jointly and in an exploratory approach. Therefore, this article proposes the following classification of digital legacy management systems: dedicated systems and integrated systems. The innovative results from this study allow software developers to better understand important issues concerning the complex cultural practices in this domain, thus contributing to a rich discussion on those systems, their requirements and limits to their development.
Download

Paper Nr: 138
Title:

A PMBoK Extension Proposal for Data Visualization in Software Project Management

Authors:

Julia C. Couto, Josiane Kroll, Duncan Ruiz and Rafael Prikladnicki

Abstract: Although the human brain stores images more easily than text, most of the tools adopted for software project management are based on textual reports. The number of software projects that fail is huge, and the lack of understanding of project complexity by stakeholders is among the reasons for project failure. Data visualization techniques and tools can help to identify project issues and reduce misunderstandings. In this paper, we investigate how project management can benefit from data visualization. To do so, we adopted a hybrid research approach composed of a systematic mapping study, a survey, and three focus group sessions. As a result, we identify a set of 16 visualization techniques and tools that can be used to support software project management, and we propose a PMBoK extension that provides a reference for practitioners who are planning to use data visualization to support software project management.
Download

Paper Nr: 143
Title:

Evaluating Random Input Generation Strategies for Accessibility Testing

Authors:

Diogo O. Santos, Vinicius S. Durelli, Andre T. Endo and Marcelo M. Eler

Abstract: Mobile accessibility testing is the process of checking whether a mobile app can be perceived, understood, and operated by a wide range of users. Accessibility testing tools can support this activity by automatically generating user inputs to navigate through the app under evaluation and running accessibility checks on each newly discovered screen. The algorithm that determines which user input will be generated to simulate the user interaction plays a pivotal role in such an approach. In state-of-the-art approaches, a uniform random algorithm is usually employed. In this paper, we compare the results of the default algorithm implemented by a state-of-the-art tool with four different biased random strategies, taking into account the number of activities executed, screen states traversed, and accessibility violations revealed. Our results show that the default algorithm had the worst performance, while the algorithm biased towards different weights assigned to specific actions and widgets had the best performance.
Download
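A biased random input generator of the kind compared in the paper can be sketched with weighted sampling; the action names and weights below are hypothetical, not the tool's actual configuration:

```python
import random

random.seed(42)

# Hypothetical weights biasing the generator toward actions/widgets that are
# more likely to reach new screens (values are illustrative).
action_weights = {
    "click_button":    5,
    "click_menu_item": 4,
    "scroll":          2,
    "type_text":       2,
    "back":            1,
}


def next_action():
    """Pick the next simulated user input with a biased random strategy."""
    actions = list(action_weights)
    weights = list(action_weights.values())
    return random.choices(actions, weights=weights, k=1)[0]


# A uniform strategy would pick "back" ~20% of the time over five actions;
# the biased one picks it weight/total = 1/14 of the time, so exploration
# backtracks less often.
sample = [next_action() for _ in range(10_000)]
back_rate = sample.count("back") / len(sample)
print(f"back rate: {back_rate:.3f}")
```

Tuning such weights is exactly the design space the paper explores: the biased strategies differ only in how probability mass is allocated to actions and widgets.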

Paper Nr: 147
Title:

C++ Web Framework: A Web Framework for Web Development using C++ and Qt

Authors:

Herik Lima and Marcelo M. Eler

Abstract: The entry barrier to web programming may be intimidating even for skilled developers, since it usually involves dealing with heavy frameworks, libraries, and lots of configuration files. Moreover, most web frameworks are based on interpreted languages and complex component interactions, which can hurt performance. Therefore, the purpose of this paper is to introduce a lightweight web framework called C++ Web Framework (CWF). It is easy to configure and combines the high performance of the C++ language, the flexibility of the Qt framework, and a tag library called CSTL (C++ Server Pages Standard Tag Library), which is used to handle dynamic web pages while keeping the presentation and business layers separated. A preliminary evaluation gives evidence that CWF is easy to use and presents good performance. In addition, the framework was used to develop real-world applications that support business operations of two private organizations.
Download

Paper Nr: 151
Title:

Technical Debt Tools: A Systematic Mapping Study

Authors:

Diego Saraiva, José G. Neto, Uirá Kulesza, Guilherme Freitas, Rodrigo Reboucas and Roberta Coelho

Abstract: Context: The concept of technical debt is a metaphor for problems faced during software evolution that reflect technical compromises, i.e., tasks that are not carried out adequately during development. They can yield short-term benefits for the project in terms of increased productivity and lower cost, but may have to be paid off with interest later. Objective: This work investigates the current state of the art of technical debt tools by identifying which activities, functionalities, and kinds of technical debt are handled by existing tools that support technical debt management in software projects. Method: A systematic mapping study is performed to identify and analyze available tools for managing technical debt, based on a set of five research questions. Results: The work contributes (i) a systematic mapping of current research in the field, (ii) a highlight of the most referenced tools, their main characteristics, and their supported technical debt types and activities, and (iii) a discussion of emerging findings and implications for future research. Conclusions: Our study identified 50 TD tools, of which 42 are new tools and 8 extend an existing one. Most of the tools address technical debt related to code, design, and/or architecture artifacts. Moreover, the different TD management activities received different levels of attention: for example, TD identification is supported by 80% of the tools, whereas 30% of them handle the TD documentation activity. Tools that deal with TD identification and measurement activities are still predominant. However, we observed that recent tools focusing on TD prevention, replacement, and prioritization activities represent emerging research trends.
Download

Paper Nr: 155
Title:

A Well-founded Ontology to Support the Preparation of Training and Test Datasets

Authors:

Lucimar L. Moura, Marcus A. da Silva, Kelli F. Cordeiro and Maria C. Cavalcanti

Abstract: In the knowledge discovery process, a set of activities guides the data preprocessing phase; one of them is the transformation of raw data into training and test data. This complex and multidisciplinary phase involves concepts and knowledge structured in distinct and particular ways across the literature and specialized tools, demanding data scientists with suitable expertise. In this work, we present PPO-O, a reference ontology of data preprocessing operators, to identify and represent the semantics of the concepts related to the data preprocessing phase. Moreover, the ontology highlights the data preprocessing operators involved in the preparation of training and test datasets. Based on PPO-O, the Assistant-PP tool was developed, capable of capturing retrospective data provenance during the execution of data preprocessing operators and thereby facilitating the reproducibility and explainability of the created datasets. This approach might be helpful to non-expert users in data preprocessing.
Download

Paper Nr: 160
Title:

Evaluation of Non-Functional Requirements for IoT Applications

Authors:

Joseane V. Paiva, Rossana C. Andrade and Rainara M. Carvalho

Abstract: The Internet of Things (IoT) is a paradigm that enables physical objects to interact and work together. IoT applications have particular characteristics, such as context-awareness, interconnectivity, and heterogeneity, and particular types of interaction: user interaction with devices (called human-thing interaction) and interaction between devices (called thing-thing interaction). These characteristics represent the expectations around the system and are also known as Non-Functional Requirements (NFRs). During the requirements elicitation of such systems they can appear as NFRs, and their combination often increases the complexity of IoT application development and evaluation. Thus, this work aims to identify which approaches and NFRs have been considered in the literature to evaluate IoT applications, and the main challenges faced by the evaluators. We use the systematic mapping methodology to provide a comprehensive view of approaches, methods, tools, and processes. As a result, we identified two tools, six approaches, one method, and one process that can be used to evaluate NFRs, a set of 42 NFRs that can be considered for IoT applications, and the main challenges related to NFR evaluation for IoT applications.
Download

Paper Nr: 201
Title:

Querying Brazilian Educational Open Data using a Hybrid NLP-based Approach

Authors:

Marco Antoni, Andrea S. Charão and Maria H. Franciscatto

Abstract: The need to capture information suited to the user has favored the development of Question Answering (QA) systems, whose main goal is retrieving a precise answer to a question expressed in natural language. These systems have been adopted in many domains to make data accessible, including Open Data. Although many QA approaches access Open Data sources, querying Brazilian Open Data is still a research gap, possibly due to the complexity that the Portuguese language presents to Natural Language Processing (NLP) approaches. For this reason, this paper proposes a hybrid NLP-based approach for querying Open Data from the Brazilian Educational Census. The proposed solution combines linguistic and rule-based NLP approaches, applied in two main processing stages (Text Preprocessing and Question Mapping) to identify the meaning of an input question and optimize the querying process. Our approach was evaluated through a QA prototype developed as a Web interface and showed feasible results, since concise and accurate answers were presented to the user.
Download

Paper Nr: 232
Title:

Exploring Big Data Analytics Adoption using Affordance Theory

Authors:

Veena Bansal and Shubham Shukla

Abstract: This research explores big data analytics adoption in organisations using affordance theory. Big data analytics is a set of tools and techniques that help companies derive useful business insights from data. Adoption of big data analytics is a challenging task. Affordance theory has been used to study the usage and effect of information technology. In this work, we have modified the affordance theory framework to study the adoption of big data analytics. The framework takes into account the characteristics of the technology and the goal and characteristics of the organisation. Organisations achieve different outcomes based on their goals and characteristics. We have used the case study method to verify the efficacy of the adapted framework. The results clearly show that the framework is effective in studying the adoption of big data analytics.
Download

Paper Nr: 244
Title:

Empirical Research on Customer Communication Challenges in the Companies Adopting Agile Practices

Authors:

Paolo Ciancarini, Shokhista Ergasheva, Ilyuza Gizzatullina, Vladimir Ivanov, Sergey Masyagin and Giancarlo Succi

Abstract: One of the most critical aspects of the software development process is Requirements Engineering, and in particular defining correct and understandable requirements in Agile methodologies. Hence, Requirements Engineering in Agile directly affects overall project success. This paper presents a research study about the usage of Agile methods in a set of industrial companies located in Russia. The survey gives insights into different aspects of the method: communication challenges and issues arising during the requirements engineering phase, in particular challenges in communication with customers. To investigate these issues, the paper presents an analysis of the state of the art, supported by the survey. The results of the interview sessions are summarized, and a set of suggestions to overcome the challenges is proposed. 30 representatives from 20 different companies, mainly Product Owners and Product Managers, participated in the survey. As the results indicate, communication is always a key challenge for companies. The analysis of particular qualities of communication in the context of a rapidly changing software development environment helped define the outcomes related to customer communication.
Download

Short Papers
Paper Nr: 24
Title:

Controlling Personal Data Flow: An Ontology in the COVID-19 Outbreak using a Permissioned Blockchain

Authors:

Paulo H. Alves, Isabella Z. Frajhof, Fernando A. Correia, Clarisse de Souza and Helio Lopes

Abstract: Data protection regulations emerged to set rights and duties in the management of personal data, and with them a new challenge: systems must comply with legal obligations whenever personal data is processed. From the controller's perspective, complying with such norms can be challenging, as it demands detailed and holistic knowledge of the data processing activity. From the data subject's point of view, controlling and following the data flow is also complex, as many entities can be authorized to access and use one's personal data. To mitigate information asymmetry and comply with data protection regulations, we developed an ontology to identify the entities involved in personal data processing. The ontology aims to build relationships between them and to share a common understanding of the rights and duties established by the Brazilian Data Protection Law in the context of the COVID-19 pandemic. Moreover, permissioned blockchain technology emerged as a solution to manage privacy concerns and enable compliance with the Law. We also developed a conceptual model using this technology and provided a data governance approach that sets a standard so that reuse becomes more accurate.
Download

Paper Nr: 31
Title:

How to Mock a Bear: Honeypot, Honeynet, Honeywall & Honeytoken: A Survey

Authors:

Paul Lackner

Abstract: In a digitized world, even critical infrastructure relies on computers controlled via networks. Attacking these sensitive infrastructures is highly attractive to intruders, who are frequently a step ahead of defenders. Honey systems (honeypots, honeynets, honeywalls, and honeytokens) seek to counterbalance this situation. They trap attackers by generating phoney services, nets, or data, thereby preventing damage to production systems and enabling defenders to study attackers without the intruders initially noticing. This paper provides an overview of existing technologies, their use cases, and pitfalls to bear in mind, illustrated with various examples. Furthermore, it shows recent efforts made in the field and examines the challenges that still need to be solved.
Download

Paper Nr: 47
Title:

Project Management Processes Used during the Development of Software Projects in Home Office Format: A Field Research in Multinational IT Companies

Authors:

Laura C. Claro, Ana D. Ferreira and Alessandra S. Dutra

Abstract: Due to the consequences of the pandemic caused by the new coronavirus, the home office has been adopted by companies to enable the continuity of their activities as an emergency and preventive measure. A survey was conducted through interviews with ten project managers from multinational IT companies, in order to analyse and understand how project management processes are performed during the development of software projects in home office format. Both positive and negative impacts on project management processes were identified in different aspects. Concerning software development teams, it was possible to observe that the greatest impact is related to communication and the shift to online tools for daily activities. We also learned that the consequences of the home office differ from the consequences of the pandemic itself, and that there are permanent changes and lessons learned from managing projects during the pandemic.
Download

Paper Nr: 55
Title:

Software Product Line Traceability and Product Configuration in Class and Sequence Diagrams: An Empirical Study

Authors:

Thais S. Nepomuceno and Edson OliveiraJr

Abstract: A set of systems that share common and variable parts is called a Software Product Line (SPL). These kinds of systems are usually part of the same market segment. The elements that vary are what allow diversification among products of the same family; thus, managing variability is an important issue in SPL engineering. There are few studies in the literature that evaluate and compare approaches to variability management in UML-based SPLs. In this work, two of the existing approaches, SMarty and Ziadi et al., are compared through an experiment to verify: the effectiveness in configuring products based on UML class and sequence diagrams; the influence of the participants' knowledge of UML, SPL, and variability on the effectiveness results; and how traceability is performed in each approach. Results show that the SMarty approach is statistically superior to Ziadi et al. in the effectiveness of configuring products with class and sequence diagrams. Regarding the knowledge level needed for better effectiveness, SMarty demands less knowledge than Ziadi et al. In addition, Ziadi et al. provides no means for round-trip tracing of variabilities in class and sequence diagrams, whereas SMarty was designed to support it.
Download

Paper Nr: 58
Title:

Challenges Women in Software Engineering Leadership Roles Face: A Qualitative Study

Authors:

Karina Kohl and Rafael Prikladnicki

Abstract: Software engineering is not only about technical solutions. To a large extent, it is also concerned with organizational issues, project management, and human behavior. There are serious gender issues that can severely limit the participation of women in science and engineering careers. It is claimed that women lead differently than men and are more collaboration-oriented, communicative, and less aggressive than their male counterparts. However, when talking with women in leadership roles at technology companies, the list of problems women face grows fast. We invited women in software engineering management roles to answer the questions from an empathy map canvas. We used thematic analysis to code the answers and grouped the codes into themes. From the analysis, we identified seven themes that helped us list the main challenges they face in their careers.
Download

Paper Nr: 59
Title:

A System Architecture in Multiple Views for an Image Processing Graphical User Interface

Authors:

Roberto S. Maciel, Michel S. Soares and Daniel O. Dantas

Abstract: Medical images are important components in modern health institutions, used mainly as a diagnostic support tool to improve the quality of patient care. Researchers and software developers have difficulty building solutions for segmenting, filtering, and visualizing medical images due to the learning curve and the complexity of installing and using image processing tools. VisionGL is an open-source library that facilitates programming through the automatic generation of C++ wrapper code. The wrapper code is responsible for calling parallel image processing functions or shaders on CPUs using OpenCL, and on GPUs using OpenCL, GLSL, and CUDA. An extension to support distributed processing, named VGLGUI, involves the creation of a client with a workflow editor and a server capable of executing that workflow. This article presents a description of the architecture in multiple views, using the architectural standard ISO/IEC/IEEE 42010:2011, the 4+1 View Model of Software Architecture, and the Unified Modeling Language (UML), for a visual programming language with parallel and distributed processing capabilities.
Download

Paper Nr: 66
Title:

Challenges in using Machine Learning to Support Software Engineering

Authors:

Olimar T. Borges, Julia C. Couto, Duncan Ruiz and Rafael Prikladnicki

Abstract: In the past few years, software engineering has been increasingly automating several tasks, and machine learning tools and techniques are among the main strategies used to assist in this process. However, there are still challenges to be overcome so that software engineering projects can increasingly benefit from machine learning. In this paper, we seek to understand the main challenges faced by people who use machine learning to assist in their software engineering tasks. To identify these challenges, we conducted a Systematic Review across eight online search engines to identify papers that report the challenges faced when using machine learning techniques and tools to execute software engineering tasks. This research focuses on the classification and discussion of eight groups of challenges: data labeling, data inconsistency, data costs, data complexity, lack of data, non-transferable results, parameterization of the models, and quality of the models. Our results can be used by people who intend to start using machine learning in their software engineering projects to become aware of the main issues they may face.
Download

Paper Nr: 68
Title:

A Blockchain-based Architecture for Enterprise Ballot

Authors:

Paulo H. Alves, Isabella Frajhof, Élisson M. Araújo, Yang R. Miranda, Rafael Nasser, Gustavo Robichez, Alessandro Garcia, Cristiane Lodi, Flavia Pacheco and Marcus Moreno

Abstract: Enterprise ballots are usually applied to support the decision-making process in voting-related scenarios. They allow members to state their opinion and settle their position with regard to a specific topic, such as the approval of budgets and the acquisition of goods and services. Even though we live in a highly digitized, data-driven society, enterprise ballots still rely on a paper-based process. Migrating to an electronic voting system, in which the entire resolution process happens online, raises various issues of verifiability, correctness, and secrecy. Blockchain plays a vital role in this environment, as it is able to provide a trustable and secure enterprise decision-making system. Therefore, we developed BallotBR, an enterprise ballot system built on a permissioned blockchain platform, to address the requirements of a challenging enterprise consortium context. This consortium is representative of many consortia across the oil and gas industry and other domains. Furthermore, we contrast the open-source proposals available in the literature with BallotBR's needs, and discuss how our solution addresses the security and trustworthiness requirements usually faced in e-voting systems.
Download

Paper Nr: 69
Title:

Using Combined Techniques for Requirements Elicitation: A Brazilian Case Study

Authors:

Naiara C. Alflen, Ligia C. Santos, Edmir V. Prado and Alexandre Grotta

Abstract: Within the requirements engineering domain, requirements elicitation (RE) is one of the most difficult phases. On the way to a successful, high-quality software development process, RE often suffers from information challenges such as ambiguity, incompleteness, and inconsistent data. Within this context, this research aims to analyze the contribution of combined RE techniques to the elicitation of i) functional requirements (FR) and ii) non-functional requirements (NFR) in an Information Systems (IS) higher education course. Via a systematic literature review (SLR), 61 articles retrieved from the Scopus database that met the search criteria were fully reviewed, generating a list of RE techniques. The top three techniques (Interview, Prototyping, and Brainstorming) were then used to support the IS course case study with 56 students. Results showed that combined FR and NFR techniques improved RE completeness and consistency when compared to each technique used in isolation.
Download

Paper Nr: 80
Title:

Design and Implementation of a Test Tool for PSD2 Compliant Interfaces

Authors:

Gloria Bondel, Josef Kamysek, Markus Kraft and Florian Matthes

Abstract: The Revised Payment Services Directive (PSD2) forces retail banks to make customer accounts accessible to third-party providers (TPPs) via standardized and secure “Access to Account” (XS2A) interfaces. Furthermore, banks have to ensure that these interfaces continuously meet functional and performance requirements; hence, testing is very important. A known challenge in software testing is the design of test cases. While standardized specifications and derived test cases exist, actual implementations of XS2A interfaces often deviate from them, leading to the need to adapt existing test cases or create new ones. We apply a design science approach, including five expert interviews, to iteratively develop the concept of a test tool that enables testing several XS2A interface implementations with the same set of test cases. The concept makes use of files mapping deviations between the standardized specification and the implemented interfaces. We demonstrate the concept's feasibility by implementing a prototype and testing its functionality in a sandbox setting.
Download

Paper Nr: 85
Title:

Using Binary Strings for Comparing Products from Software-intensive Systems Product Lines

Authors:

Mike Mannion and Hermann Kaindl

Abstract: The volume, variety, and velocity of products in software-intensive systems product lines are increasing. One challenge is to understand the range of similarity between products in order to evaluate its impact on product line management. This paper contributes to product line management by presenting a product similarity evaluation process in which (i) a product configured from a product line feature model is represented as a weighted binary string, (ii) the overall similarity between products is compared using the Jaccard Coefficient similarity metric, and (iii) the significance of individual features and feature combinations to product similarity is explored by modifying the weights. We propose a method for automatically allocating weights to features depending on their position in a product line feature model, although we do not claim that either this allocation method or the use of the Jaccard Coefficient is optimal. We illustrate our ideas with worked examples of mobile phones.
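As a rough illustration of the comparison step this abstract describes, a weighted Jaccard similarity over binary feature strings can be sketched as follows. This is a minimal sketch, not the paper's method: the helper name, feature vectors, and weight values are all illustrative assumptions.

```python
# Hypothetical sketch: weighted Jaccard similarity between two products,
# each encoded as a 0/1 feature vector with per-feature weights.

def weighted_jaccard(a, b, weights):
    """a, b: 0/1 feature flags; weights: one weight per feature position."""
    if not (len(a) == len(b) == len(weights)):
        raise ValueError("vectors and weights must have equal length")
    # Sum the weights of features present in both products (intersection)
    inter = sum(w for x, y, w in zip(a, b, weights) if x == 1 and y == 1)
    # Sum the weights of features present in either product (union)
    union = sum(w for x, y, w in zip(a, b, weights) if x == 1 or y == 1)
    return inter / union if union else 1.0  # two empty products count as identical

# Two hypothetical phone configurations over five features:
phone_a = [1, 1, 0, 1, 0]
phone_b = [1, 0, 0, 1, 1]
uniform = [1, 1, 1, 1, 1]   # all weights equal: plain Jaccard Coefficient
skewed  = [4, 2, 2, 1, 1]   # e.g. heavier weights for higher-level features

print(weighted_jaccard(phone_a, phone_b, uniform))  # 0.5 (2 shared / 4 in union)
print(weighted_jaccard(phone_a, phone_b, skewed))   # 0.625 (5 / 8)
```

Raising the weight of a shared feature pulls the score up, while raising the weight of a differing feature pulls it down, which is how step (iii) explores feature significance.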
Download

Paper Nr: 91
Title:

Lex-Libras: Morphosyntactic Model of the Brazilian Sign Language to Support a Context-based Machine Translation Process

Authors:

Antônio M. Silva, Tanya A. Felipe, Laura S. García, Diego R. Antunes and André P. Guedes

Abstract: Brazilian Sign Language (BSL–Libras) is the preferred language of Deaf communities in Brazil. The Human-Computer Interaction Architecture in Sign Language (HCI-SL), which will offer user-system interaction in BSL, was previously proposed. In 2015, the architecture's formal model was developed, together with the phonological decomposition of signs. The work described here advances it by proposing morphological rules. Unlike American Sign Language, Libras has a group of verbs that inflect, so translators need to represent this phenomenon in the automatic generation process and reflect it in the output, a 3D avatar. The related Portuguese-BSL translators available have not yet been able to generate BSL sentences that are correct with regard to these verbs. This paper presents the Lex-Libras modeling process, a set of rules in the form of a Context-Free Grammar capable of describing the morphosyntactic level of BSL. These rules compose the morphosyntactic model of the architecture in the semiautomatic Brazilian Portuguese to Brazilian Sign Language translation process through an avatar.
Download

Paper Nr: 94
Title:

An Empirical Study on the Impact of Aspect-oriented Model-driven Code Generation

Authors:

André Menolli, Luan S. Melo, Maurício M. Arimoto and Andreia Malucelli

Abstract: Over the years, innovative approaches to software development have been proposed. Among the main approaches, we can highlight aspect-oriented software development. However, applying aspect-oriented software development is not simple; it may be facilitated by model-driven development, mainly because models can be built to drive consolidated aspect solutions. In this context, we analyzed the impact of aspect-oriented solutions created from a model-driven approach. To this end, a model-driven approach to create aspect-oriented code was proposed and an experiment focusing on data persistence was conducted. From the gathered data, we empirically discuss the impact of the generated solutions compared to object-oriented solutions. Several code metrics were analyzed using quantitative analysis, and the results show that the approach may help reuse aspect-oriented solutions and improve code quality and productivity.
Download

Paper Nr: 97
Title:

Towards a Data-Driven Requirements Elicitation Tool through the Lens of Design Thinking

Authors:

José C. de Souza Filho, Walter T. Nakamura, Lígia M. Teixeira, Rógenis P. da Silva, Bruno F. Gadelha and Tayana U. Conte

Abstract: Data-Driven Requirements Engineering (DDRE) proposes that software requirements development go beyond the application of traditional elicitation techniques (e.g., interviews and questionnaires) by considering other sources of data, such as user reviews available on app stores, social networks, and forums. While many studies address requirements mining and automatic classification through machine learning, information retrieval, and natural language processing algorithms, few investigate how to support the software practitioners who will use this knowledge in practice, for instance, through tools that support the process. In this context, Design Thinking (DT) emerged as a promising approach to design user-centered solutions to this problem. Thus, in this paper, we conducted an exploratory study to investigate how DT benefits the development of a data-driven requirements elicitation tool. To do so, we applied the Double Diamond process, keeping Brown's DT cycles in mind, supported by a set of DT techniques. Our results indicate that DT techniques can be integrated into the development process, allowing a better understanding of the problem and supporting the development of user-centered solutions. We present the benefits and drawbacks of adopting DT as a toolbox in the context of DDRE tools.
Download

Paper Nr: 114
Title:

A Literature-based Derivation of a Meta-framework for IT Business Value

Authors:

Sarah Seufert, Tobias Wulfert, Jan Wernsdörfer and Reinhard Schütte

Abstract: The business value of IT in companies is a highly discussed topic in information systems research. While IT business value is an agreed-upon term, its decomposition and assessment at a more detailed level are ambiguous in the literature and in practice. However, assessing IT business value is pivotal for goal-oriented IT management. Therefore, we suggest a hierarchical decomposition of IT business value into aggregated impacts and atomic impacts. We introduce a taxonomy to gain a better understanding of what types of atomic impacts may be caused by IT investments. With the help of the taxonomy, we classify a total of 957 values from existing value catalogs and derive 29 archetypal IT impacts grouped by a company's business units. Bundling this grouping with exemplary impacts for IT value assessment, we finally propose an IT value meta-framework for structured business value assessment.
Download

Paper Nr: 115
Title:

Investigating Information about Software Requirements in Projects That Use Continuous Integration or Not: An Exploratory Study

Authors:

Rafael Nascimento, Luana Souza, Pablo Targino, Gustavo Sizílio, Uirá Kulesza and Márcia Lucena

Abstract: Continuous Integration (CI) is a development practice that involves the automation of compilation and testing procedures, increasing the frequency of code integration and the delivery of new features and providing improvements in software quality. Open Source Software (OSS) projects are increasingly associated with the use of CI practices. However, the literature has not yet explored whether and how this practice influences the presence and types of requirements-related artifacts and information. Thus, this study investigates the presence, types of artifacts, and information related to requirements found in projects on GitHub, in particular projects that use CI. An exploratory methodology was used to identify and classify the requirements artifacts. The results show that projects that adopt CI have, in general, a larger number of requirements artifacts, mainly among GitHub platform artifacts such as issues, pull requests, and labels.
Download

Paper Nr: 117
Title:

A Self-protecting Approach for Service-oriented Mobile Applications

Authors:

Ronaldo R. Martins, Marcos O. Camargo, William F. Passini, Gabriel N. Campos and Frank J. Affonso

Abstract: The evolution of software systems in the last 10 years has brought new challenges to the development area, especially for service-oriented Mobile Applications (MobApps). In the mobile computing domain, the integration of MobApps into service-based systems has been a feasible alternative to boost the processing and storage capacity of such applications. In parallel, this type of application needs monitoring approaches, mainly due to the need to deal with a large number of users, continuous changes in the execution environment, and security threats. Besides that, most MobApps do not provide the self-protecting property by default, exposing them to a number of adverse situations regarding integrity of execution, reliability, security, and adaptation at runtime. The principal contribution of this paper is an approach based on the MAPE-K (Monitor-Analyze-Plan-Execute over Knowledge) loop and machine learning techniques to ensure self-protecting features in MobApps, in particular those based on services. Experimental results showed that this approach can autonomously and dynamically mitigate threats, making these applications more trustworthy and intrusion-safe. Our approach has good potential to contribute to the development of MobApps, going beyond existing approaches.
Download

Paper Nr: 124
Title:

Communication Channels in Brazilian Software Projects: An Analysis based on Case Study

Authors:

Leandro Z. Rezende, Edmir V. Prado and Alexandre Grotta

Abstract: Technology project management is challenging, yet there are few works in the literature relating communication channels (CC) to project success. Therefore, this research aims to analyze the influence of communication channels on the short- and medium-term success of software development projects in a Brazilian enterprise. The research is based on a literature review of communication channels and project success, has a qualitative and descriptive approach, and used an ex-post-facto strategy. Ten software development project management professionals were interviewed at a large banking institution in the first half of 2019. The research confirmed a positive association between CCs and software project success when considering efficiency, impact on the customer, and project staff. We also identified the two most relevant CCs for the studied context, as well as a CC not mentioned in the literature.
Download

Paper Nr: 130
Title:

Project based on Agile Methodologies by DMAIC

Authors:

Bianca G. Salvadori, Patricia F. Magnago and Alessandra S. Dutra

Abstract: The demand for the inclusion of Agile Methodologies in technology products and services, particularly in software development, has become increasingly common. Their application guarantees frequent deliverables, but not necessarily the desired quality, especially when dealing with the technical challenge of rework and with employee behavior, both known challenges in the management of this methodology. Based on the DMAIC method, a customized software development project for a client was analyzed using Agile Methodologies. The proposed objective fulfilled its role of analyzing the process in the development cycle. This was achieved by diagnosing gaps in the processes involved in the treatment of 61 identified bugs, collecting data and feedback from the parties involved, and mapping opportunities for improvement, such as the implementation of FDD, to establish workaround actions.
Download

Paper Nr: 154
Title:

Lessons Learned from a Lean R&D Project

Authors:

Bianca Teixeira, Bruna Ferreira, André Damasceno, Simone J. Barbosa, Cassia Novello, Hugo Villamizar, Marcos Kalinowski, Thuener Silva, Jacques Chueke, Hélio Lopes, André Kuramoto, Bruno Itagyba, Cristiane Salgado, Sidney Comandulli, Marinho Fischer and Leonardo Fialho

Abstract: In a partnership between academia and industry, we report our experience in applying the Lean R&D approach in the IAGO project, a machine-learning-based dashboard for the oil and gas industry. The approach is grounded in continuous experimentation through agile development, beginning with a Lean Inception workshop to define a Minimum Viable Product. Then, after the technical feasibility assessment and conception phases, Scrum-based development begins, continually testing business hypotheses. We discuss our experiences in following the approach and report developers' perceptions gathered through interviews, as well as user evaluations of the final product. We found that the Lean Inception works well for aligning expectations and objectives among stakeholders, but it is not enough to level the domain knowledge among developers. The participation of end users in the workshop and throughout the project, as well as constant communication among all stakeholders, is very important for delivering appropriate solutions.
Download

Paper Nr: 158
Title:

Design Thinking Techniques Selection in Software Development: On the Understanding of Designers and Software Engineers Choices

Authors:

Lauriane Pereira, Rafael Parizi, Sabrina Marczak and Tayana Conte

Abstract: Design Thinking (DT) is a concept that promises increased innovativeness through a more user-centered approach. DT offers a mindset, working spaces, and techniques to support the generation of ideas and their transformation into solutions. However, the selection of DT techniques is a complex endeavor, since it needs to take into account the context and nature of the problem, the user profile, and other characteristics. In addition, little is known about how professionals make this selection. This paper reports on a focus group study with professionals working in software development. We used the Cynefin framework combined with the Double Diamond model to explore the process of selecting DT techniques for hypothetical scenarios. We found that the professionals need to respect the default domain to set their strategies and allow insights to emerge.
Download

Paper Nr: 183
Title:

Asynchronous Data Provenance for Research Data in a Distributed System

Authors:

Benedikt Heinrichs and Marius Politze

Abstract: Many provenance systems assume that the data flow is directly orchestrated by them or that logs describing it are present. This works well until these assumptions no longer hold. The Coscine platform is a way for researchers to connect to different storage providers and annotate their stored data with discipline-specific metadata. These storage providers, however, do not inform the platform of externally induced changes, for example those made by the user. Therefore, this paper focuses on the need for data provenance that is not directly produced and has to be deduced after the fact. An approach is proposed for creating and dealing with such asynchronous data provenance, making use of change indicators to deduce whether a data entity has been modified. A representation for describing such asynchronous data provenance in the Resource Description Framework (RDF) is discussed. Finally, a prototypical implementation of the approach in the Coscine use case is described, and future steps for the approach and prototype are detailed.
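The change-indicator idea described above can be sketched roughly as follows. This is a hypothetical illustration, not the Coscine implementation: the choice of indicators (content size and a SHA-256 digest), the function names, and the shape of the provenance record are all assumptions.

```python
# Hypothetical sketch: deduce asynchronous provenance after the fact by
# comparing a data entity's current change indicators against the last
# state the platform recorded for it.
import hashlib

def indicators(data: bytes) -> dict:
    """Compute simple change indicators for a data entity's content."""
    return {"size": len(data), "sha256": hashlib.sha256(data).hexdigest()}

def deduce_event(recorded: dict, current_data: bytes):
    """Return a provenance event if the entity changed outside the platform."""
    current = indicators(current_data)
    if current == recorded:
        return None  # indicators match: no external modification to record
    return {"event": "ExternalModification", "old": recorded, "new": current}

recorded = indicators(b"measurement run 1")
print(deduce_event(recorded, b"measurement run 1"))           # unchanged -> None
print(deduce_event(recorded, b"measurement run 2")["event"])  # ExternalModification
```

A real system would persist the recorded indicators alongside the metadata and serialize any deduced event as RDF, as the abstract discusses.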
Download

Paper Nr: 193
Title:

Using Blockchain to Trace PDO/PGI/TSG Products

Authors:

Luis Alves, Tiago Carvalhido, Estrela F. Cruz and António M. Rosado da Cruz

Abstract: To help preserve the cultural traditions of populations and their social and economic sustainability, the European Union created a set of denominations such as “Protected Designation of Origin” (PDO), “Protected Geographical Indication” (PGI), and “Traditional Specialty Guaranteed” (TSG), certifying and guaranteeing a set of characteristics of the region in the product and/or its manufacturing process. In this paper, a blockchain-based traceability platform is proposed to trace PDO/PGI/TSG products from their source to the final consumers, using Hyperledger Fabric. The platform enables the transparent registration of activities throughout the value chain and provides the traceability information demanded by informed consumers while, at the same time, helping to avoid forgeries.
Download

Paper Nr: 203
Title:

Database-Conscious End-to-End Testing for Reactive Systems using Containerization

Authors:

Denton Wood and Tomáš Černý

Abstract: Reactive systems are a relatively new architectural paradigm with important implications for computer science. While much attention has been paid to effectively running end-to-end (E2E) tests on these architectures, little work has considered the implications of tests which modify the database. We propose a framework to group and orchestrate E2E tests based on data qualities across a series of parallel containerized application instances. The framework is designed to run completely independent tests in parallel while being mindful of system costs. We present a conceptual version of the framework and discuss future directions for this work.
Download

Paper Nr: 206
Title:

A Hybrid Approach using Progressive and Genetic Algorithms for Improvements in Multiple Sequence Alignments

Authors:

Geraldo D. Zafalon, Vitoria Z. Gomes, Anderson R. Amorim and Carlos R. Valêncio

Abstract: Multiple sequence alignment is one of the main tasks in bioinformatics. It is used in different important biological analyses, such as function and structure prediction of unknown proteins. There are several approaches to perform multiple sequence alignment, and the use of heuristics and meta-heuristics stands out because of the search ability of these methods, which generally leads to good results in a reasonable amount of time. Progressive alignment and genetic algorithms are among the most used heuristics and meta-heuristics to perform multiple sequence alignment. However, both methods have disadvantages, such as error propagation in the case of progressive alignment and local optima in the case of genetic algorithms. Thus, this work proposes a new hybrid refinement phase using a progressive approach to locally realign the multiple sequence alignments produced by genetic algorithm based tools. Our results show that our method is able to improve the quality of the alignments of all families from BAliBASE. Considering the Q and TC quality measures from BAliBASE, we obtained improvements of 55% for Q and 167% for TC. With these results, we can provide more biologically significant alignments.
Download

Paper Nr: 222
Title:

Success Factors of Business Intelligence and Performance Dashboards to Improve Performance in Higher Education

Authors:

Asmaa Abduldaem and Andy Gravell

Abstract: The need for effective communication becomes more important as the size of an organisation increases. This underlines the importance of using tools like Business Intelligence (BI) and dashboards to monitor and improve organisational output, as well as to improve the accuracy and efficiency of the data that is available. However, there is a lack of understanding of applying analytics and strategic insight into analytics in Higher Education (HE), compared to other sectors such as business, government, and healthcare. In addition, the use of BI and dashboards in HE has been studied by only a small number of papers, which are particularly limited in investigating the factors that ensure successful application within this context or in understanding the metrics that determine this success. This highlights the importance of understanding the successful adoption of such technologies to improve performance and decision-making processes, particularly within HE institutions. In this paper, we concentrate on investigating the successful adoption of business intelligence and department-level tactical dashboards to support performance measurement and decision-making processes in HE. As the research area is complex and multidimensional, a triangulation method has been applied to support a rich set of data: a qualitative approach to gather insights into potential factors, and a quantitative approach to confirm these factors. By adapting the Balanced Scorecard concept to measure the success factors, we conjecture that it would enhance successful adoption within this sector.
Download

Paper Nr: 235
Title:

Transitions in Information Systems Development: SME's Issues and Challenges

Authors:

Lakshminarayana Kompella

Abstract: Organizations experience external pressures, such as changing technologies and faster time-to-market, which drive them to make changes. We can refer to these changes as transitions. Organizations that use cloud infrastructure leverage faster application availability at reduced cost and pay-per-usage of features to reduce their total cost of ownership (TCO). TCO manifests as external pressure on organizations that develop on-premises software applications. To stay competitive, these organizations need either to migrate their applications to the cloud or to change their existing on-premises software application. This paper considers the latter: bringing about changes for a successful transition. Software application development involves social and technical aspects, and change must include both. To examine change as a phenomenon, we need to examine it in its settings, and a case-study method is best suited. The selected case is a Small and Medium Enterprise (SME) with on-premises application development in human capital management. The findings indicate that agility is necessary to stay competitive. For agility, across different stages of the value chain, associated contexts come into play, which require appropriate social and technical changes rather than necessarily migrating to cloud-based development. To reduce TCO, a change in the form of adopting open-source technologies is a necessity. Further, for the changed on-premises application to provide competitiveness, apart from managing prevalent external pressures, the organization must manage debt, which comprises technical and social changes.
Download

Paper Nr: 238
Title:

Technical Due Diligence as a Methodology for Assessing Risks in Start-up Ecosystems

Authors:

Iván Sanz-Prieto, Luis de-la-Fuente-Valentín and Sergio Rios-Aguilar

Abstract: The dynamics of the transformations that the world is experiencing at a global scale, due to the intensity of technological change, demand sophisticated management tools to assess risks in the business and industrial sectors, aimed at ensuring investment security. The objective of this article is to analyse and propose technical Due Diligence as a methodology to assess risks in start-up ecosystems. A mixed method was used: a quantitative approach and a qualitative approach, supported by a literature review. The sample was composed of thirty (30) experts, to whom a survey was applied; ten (10) of them were also interviewed, and the resulting information was subjected to a triangulation process supported by documentary sources. The results showed the need to identify technological risks (product, service and process); commercial risks regarding the scalability of the business; and financial, legal, fiscal and environmental risks as part of a comprehensive and integral procedure.
Download

Paper Nr: 243
Title:

Psychological Contracts in Business Process Transformation Effect: Structure of Psychological Contracts

Authors:

Kayo Iizuka and Chihiro Suematsu

Abstract: This paper presents the results of an analysis of the psychological contract structure in business process transformation (BPT) projects. Much research has contributed to improving the effectiveness of BPT. As for cases in Japan, there are some unique issues around changing business processes, and overall business processes, including back-office work throughout all industries, are not always efficient: Japan's labor productivity was still the lowest among G7 members in 2018, according to the Japan Productivity Center. Therefore, besides the BPT methodologies used in Western countries, it is necessary to seek approaches that address the unique issues of BPT in Japan. In this paper, the authors focus on the relationship between the psychological contract and the BPT effect as one of the success factors of BPT, based on surveys they conducted in 2019 and 2020.
Download

Paper Nr: 10
Title:

In Continuous Software Development, Tools Are the Message for Documentation

Authors:

Theo Theunissen, Stijn Hoppenbrouwers and Sietse Overbeek

Abstract: In Continuous Software Development, a wide range of tools is used for all steps in the life cycle of a software product. Information about the software product is distributed across all those tools and not stored in a central repository. To better understand software products, the following media elements must be taken into account: the types of information; the tools, tool stacks and ecosystems to manage the (types of) information; and the amount of structure. In the title, “tools” refers to the phrase “the medium is the message”, coined by McLuhan and Fiore (1967), pointing out that the medium should be a subject of investigation as well as the content of the message. In this paper the tools include tool stacks, ecosystems, the types of information and the amount of structure; together they define the content of the message. Our approach to presenting relevant information to different stakeholders is rooted in understanding and utilizing these aspects. In this respect, the amount of structural variety of information defines the value for information creation and retrieval, including the tools to process that information. Documentation is considered an information type that is processed through tools in a software development ecosystem.
Download

Paper Nr: 12
Title:

SMartyTesting: A Model-Based Testing Approach for Deriving Software Product Line Test Sequences

Authors:

Kleber L. Petry, Edson OliveiraJr, Leandro T. Costa, Aline Zanin and Avelino F. Zorzo

Abstract: Code reuse and testing approaches to ensure and increase productivity and quality in software development have grown considerably among process models in recent decades. Software Product Line (SPL) is a technique in which non-opportunistic reuse is the core of the development process. Given the inherent variability in products derived from an SPL, an effective way to ensure the quality of such products is to use testing techniques that take SPL variability into account at all stages. There are several approaches for SPL variability management, especially those based on the Unified Modeling Language (UML). The SMarty approach provides users with the identification and representation of variability in UML models using stereotypes and tagged values. SMarty currently offers a verification technique for its models, such as sequence diagrams, in the form of checklist-based inspections. However, SMarty does not provide a way to validate models using, for example, Model-Based Testing (MBT). Thus, this paper presents SMartyTesting, an approach to assist the generation of test sequences from SMarty sequence diagrams. To evaluate the feasibility of the approach, we performed an empirical comparative study against an existing SPL MBT approach (SPLiT-MBt) based on activity diagrams, taking into account two criteria: sequence differentiation and number of sequences generated. Results indicate that SMartyTesting is feasible for generating test sequences from SMarty sequence diagrams. Preliminary evidence relies on generating more test sequences from sequence diagrams than from activity diagrams, thus potentially increasing SPL test coverage.
Download

Paper Nr: 126
Title:

Performance Variability Analysis on Road Accident in Yangon

Authors:

Kyi P. Hlaing, Nyein T. Aung, Swe Z. Hlaing and Koichiro Ochimizu

Abstract: Myanmar has the second-highest road accident death toll in Southeast Asia, according to WHO data on road traffic accident deaths published in 2018. This study applies the Functional Resonance Analysis Method (FRAM) to analyze road accidents in Yangon as an alternative approach to safety management. Firstly, a basic FRAM model is presented that shows the road accident functions. Secondly, this paper develops a quantitative model to predict the functional upstream-downstream coupling by using the Naive Bayes algorithm. Finally, this study performs a cross-tabulation analysis and the chi-square test to determine whether there is an association between the outputs of the functions. The results show that the upstream function’s output variability affects the downstream function’s output variability. The variability of road accident factors such as accident place, accident season, accident time, and type of vehicle affects the severity level of the road accident.
Download
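The chi-square association check described in the abstract can be sketched as follows. The contingency counts below (accident time versus severity) are invented for illustration only; they are not the paper's data.

```python
# Chi-square statistic of independence on a 2D contingency table,
# as used in the abstract's cross-tabulation analysis.

def chi_square_statistic(table):
    """Compute the chi-square statistic for a contingency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under the independence hypothesis.
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical cross-tabulation: rows = day/night, columns = minor/severe.
table = [[30, 10],
         [15, 25]]
print(round(chi_square_statistic(table), 3))  # 11.429
```

A statistic this large (far above the critical value of about 3.84 for one degree of freedom at the 5% level) would indicate an association between the two factors, which is the kind of conclusion the study draws.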

Paper Nr: 223
Title:

PLASMA: Platform for Auxiliary Semantic Modeling Approaches

Authors:

Alexander Paulus, Andreas Burgdorf, Lars Puleikis, Tristan Langer, André Pomp and Tobias Meisen

Abstract: In recent years, the impact and usability of semantic data management have increased continuously. Still, one limiting factor is the need to create a semantic mapping, in the form of a semantic model, between data and a used conceptualization. Creating this mapping manually is a time-consuming process, which especially requires knowing the used conceptualization in detail. Thus, the majority of recent approaches in this research field focus on fully automated semantic modeling, but reliable results cannot be achieved in all use cases, i.e., a human must step in. The subsequent manual adjustment of automatically created models has already been mentioned in previous works but has, in our opinion, received too little attention so far. In this paper, we treat the involvement of a human as an explicit phase of the semantic model creation process, called semantic refinement. Semantic refinement comprises the manual improvement of semantic models generated by automated approaches. In order to additionally enable dedicated research in this direction in the future, we also present PLASMA, a platform for utilizing existing and future modeling approaches in a consistent and extendable environment. PLASMA aims to support the development of new semantic refinement approaches by providing the necessary supplementary functionalities.
Download

Area 4 - Software Agents and Internet Computing

Full Papers
Paper Nr: 46
Title:

A Hybrid IoT Analytics Platform: Architectural Model and Evaluation

Authors:

Theo Zschörnig, Jonah Windolph, Robert Wehlitz and Bogdan Franczyk

Abstract: Data analytics are an integral part of the utility and growth of the Internet of Things (IoT). The data, which is generated from a wide variety of heterogeneous smart devices, presents an opportunity to gain meaningful insights into different aspects of the everyday lives of end-consumers, but also into value-adding processes of businesses and industry. The advancements in streaming and machine learning technologies in the past years may further increase the potential benefits that arise from data analytics. However, these developments need to be enabled by the underlying analytics architectures, which have to address a multitude of different challenges. Especially in consumer-centric application domains, such as smart home, there are different requirements, which are influenced by technical, but also legal or personal constraints. As a result, analytics architectures in this domain should support the hybrid deployment of analytics pipelines at different network layers. Currently available approaches lack the needed capabilities. Consequently, in this paper, we propose an architectural solution which enables hybrid analytics pipeline deployments, thus addressing several challenges described in previous scientific literature.
Download

Paper Nr: 53
Title:

An Endogenous and Self-organizing Approach for the Federation of Autonomous MQTT Brokers

Authors:

Marco A. Spohn

Abstract: Many applications for the Internet of Things (IoT) use the publish/subscribe (P/S) communication paradigm. Among the most representative protocols is MQTT. Its basic architecture relies on a single server/broker: publishers send data topics to the broker, which then forwards the data to subscribers. Having a single server may make configuration and management easier; however, it is a potential bottleneck as well as a single point of failure. Scalability in MQTT broker deployment is usually addressed by clustering servers, but most such solutions are proprietary. Alternatively, autonomous brokers could be federated to scale and increase availability. A self-organizing federation proposal already available in the literature implies substantial changes to the brokers’ inner implementation; furthermore, it has not yet been implemented. This work explores an endogenous federation approach: designing a supporting agent (called the federator) that realizes the brokers’ federation based on the native P/S mechanism. No modifications to regular/standard brokers are implied, but changes to the client side (i.e., publishers and subscribers) are required. This work presents a primary architecture and an initial case study to grasp some fundamentals and benefits of adopting the proposed solution.
Download

Paper Nr: 167
Title:

Similarity-inclusive Link Prediction with Quaternions

Authors:

Zuhal Kurt, Ömer N. Gerek, Alper Bilge and Kemal Özkan

Abstract: This paper proposes a Quaternion-based link prediction method, a novel representation learning method for recommendation purposes. The proposed algorithm relies on computation with Quaternion algebra, benefiting from the expressiveness and rich representation learning capability of Hamilton products. The method follows a link prediction approach and reveals significant potential for performance improvement in top-N recommendation tasks. The experimental results indicate the superior performance of the approach using two quality measurements – hit rate and coverage – on the MovieLens and HetRec datasets. Additionally, extensive experiments are conducted on three subsets of the Amazon dataset to assess the flexibility of the algorithm to incorporate different information sources and to demonstrate the effectiveness of Quaternion algebra in graph-based recommendation algorithms. The proposed algorithms obtain comparatively higher performance when improved with similarity factors. The results show that the proposed quaternion-based algorithm can effectively deal with the deficiencies in graph-based recommender systems, making it a preferable alternative among the other available methods.
Download
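The Hamilton product that the abstract credits for the method's expressiveness is the standard non-commutative product of two quaternions (w, x, y, z). A minimal sketch of that operation alone, not of the paper's link-prediction model:

```python
# Hamilton product of two quaternions represented as (w, x, y, z) tuples.

def hamilton_product(q, p):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return (
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,  # real part
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,  # i component
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,  # j component
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,  # k component
    )

# Non-commutativity is the source of the richer interactions: i*j = k, j*i = -k.
print(hamilton_product((0, 1, 0, 0), (0, 0, 1, 0)))  # (0, 0, 0, 1)
print(hamilton_product((0, 0, 1, 0), (0, 1, 0, 0)))  # (0, 0, 0, -1)
```

In quaternion embedding models, this asymmetry lets the score of a user-item link differ depending on the order of the factors, which real- or complex-valued dot products cannot capture as directly.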

Paper Nr: 190
Title:

LISSU: Integrating Semantic Web Concepts into SOA Frameworks

Authors:

Johannes Lipp, Siyabend Sakik, Moritz Kröger and Stefan Decker

Abstract: In recent years, microservice-based architectures have become the de-facto standard for cloud-native applications and enable modular and scalable systems. The lack of communication standards, however, complicates reliable information exchange. While syntactic checks like datatypes or ranges are mostly solved nowadays, semantic mismatches (e.g., different units) are still problematic. Semantic Web Services and their derivatives tackle this challenge but are mostly too ambitious for practical use. In this paper, we propose Lightweight Semantic Web Services for Units (LISSU) to support Semantic Web experts in their collaboration with domain experts. LISSU allows developers to specify semantics for their services via URI ontology references, and automatically validates these before initiating communication. It automatically corrects unit mismatches via conversions whenever possible. A real-world demonstrator setup in the manufacturing domain shows that LISSU leads to more predictable communication.
Download

Short Papers
Paper Nr: 33
Title:

Process-aware Decision Support Model for Integrating Internet of Things Applications using AHP

Authors:

Christoph Stoiber and Stefan Schönig

Abstract: Following the trend of Industry 4.0 and Cyber-Physical Systems (CPS), many industrial companies perform costly projects to integrate Internet of Things (IoT) applications aiming at beneficial business process improvements. However, deciding on the right IoT projects is challenging and often based on unilateral assessments that lack the required profoundness. A suitable method for deciding on specific IoT applications is required that incorporates the desired goals and considers the underlying process details. We therefore propose a structured decision model that considers IoT application clusters, anticipated Business Process Improvement (BPI) goals, and details of the process where the application should be implemented. At first, specific IoT application clusters are developed by conducting an extensive literature review. These clusters are examined regarding several characteristics, such as their value proposition or technical aspects. Using this information, an Analytical Hierarchy Process (AHP) model is proposed that incorporates the main objective, relevant BPI dimensions, and the formulated application clusters. To validate our approach, we applied the model to an actual business process of a leading industrial company.
Download

Paper Nr: 83
Title:

Exploring Differential Privacy in Practice

Authors:

Davi G. Hasuda and Juliana M. Bezerra

Abstract: Every day an unimaginable amount of data is collected from Internet users. All this data is essential for designing, improving and suggesting products and services. In this frenzy of capturing data, privacy is often put at risk. Therefore, there is a need to reconcile capturing relevant data with preserving each person's privacy. Differential Privacy is a method that adds noise to data in a way that preserves privacy. Here we investigate Differential Privacy in practice, aiming to understand how to apply it and how it can affect data analysis. We conduct experiments with four classification techniques (Decision Tree, Naïve Bayes, MLP and SVM), varying the privacy degree in order to analyze their accuracy. Our initial results show that low noise guarantees high accuracy; that larger data size is not always better in the presence of noise; and that noise in the target does not necessarily disrupt accuracy.
Download
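The noise addition the abstract describes is most commonly realized with the Laplace mechanism. A generic textbook sketch of an ε-differentially-private count query follows; the ages and the predicate are invented for illustration and this is not the paper's experimental pipeline.

```python
# Laplace mechanism for an epsilon-differentially-private counting query.
import math
import random

def private_count(values, predicate, epsilon):
    """Release a count with Laplace(sensitivity/epsilon) noise added.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so the noise scale is 1/epsilon.
    """
    true_count = sum(1 for v in values if predicate(v))
    # Sample Laplace(0, 1/epsilon) noise via inverse-CDF transform.
    u = random.random() - 0.5
    noise = -math.copysign(1.0, u) * (1.0 / epsilon) * math.log(1 - 2 * abs(u))
    return true_count + noise

ages = [23, 35, 31, 62, 45, 29, 51]  # toy dataset
# Smaller epsilon means stronger privacy and noisier answers.
print(private_count(ages, lambda a: a > 40, epsilon=1.0))
```

Repeating the query yields answers scattered around the true count (3 here), which is exactly the accuracy-versus-privacy trade-off the experiments in the paper measure at different privacy degrees.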

Paper Nr: 92
Title:

Using Academic Genealogy for Recommending Supervisors

Authors:

Gabriel Madeira, Eduardo N. Borges, Giancarlo Lucca, Washington Carvalho-Segundo, Jonata C. Wieczynski, Helida Santos and Graçaliz Dimuro

Abstract: Selecting an academic supervisor is a complicated task. Master's and Ph.D. candidates usually select the most prestigious universities in a given region, investigate the graduate programs in a research area of interest, and analyze the professors’ profiles. This choice is a manual task that requires extensive human effort, and usually the result is not good enough. In this paper we propose a Recommender System that enables one to choose an academic supervisor based on his/her academic genealogy. We used metadata from different theses and dissertations and applied the nearest centroid model to perform the recommendation. The obtained results showed the high precision of the recommendations, which supports the hypothesis that the proposed system is a useful tool for graduate students.
Download
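The nearest centroid model mentioned in the abstract can be sketched as follows. The supervisor names and two-dimensional topic vectors are hypothetical toy data, not the paper's thesis metadata.

```python
# Nearest-centroid recommendation: each supervisor is summarized by the
# centroid of the feature vectors of the works they supervised, and a
# candidate's profile is matched to the closest centroid.
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    return tuple(sum(xs) / len(xs) for xs in zip(*vectors))

def nearest_centroid(profile, centroids):
    """Return the label whose centroid is closest to the profile (Euclidean)."""
    return min(centroids, key=lambda name: math.dist(profile, centroids[name]))

supervisors = {
    # hypothetical (topic-A score, topic-B score) per supervised thesis
    "Prof. Silva": [(0.9, 0.1), (0.8, 0.2)],
    "Prof. Souza": [(0.2, 0.9), (0.1, 0.8)],
}
centroids = {name: centroid(vecs) for name, vecs in supervisors.items()}

candidate = (0.85, 0.15)  # a candidate strongly interested in topic A
print(nearest_centroid(candidate, centroids))  # Prof. Silva
```

In the paper's setting the features would come from thesis and dissertation metadata along the academic genealogy, but the matching step has this same shape.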

Paper Nr: 103
Title:

Faceshield HUD: Extended Usage of Wearable Computing on the COVID-19 Frontline

Authors:

Mateus C. Silva, Ricardo R. Oliveira, Thiago D’Angelo, Charles B. Garrocho and Vicente P. Amorim

Abstract: Wearable Computing brings up novel methods and appliances to solve various problems in society’s routine tasks. It also brings the possibility of enhancing human abilities and perception during the execution of specialist activities. Finally, the flexibility and modularity of wearable devices allow the idealization of multiple appliances. In 2020, the world faced a global threat from the COVID-19 pandemic. Healthcare professionals are directly exposed to contamination and therefore require attention. In this work, we propose a novel wearable appliance to aid healthcare professionals working on the frontline of pandemic control. This new approach aids professionals in daily tasks and monitors their health for early signs of contamination. Our results demonstrate the system’s feasibility and constraints using a prototype and indicate initial restrictions for this appliance. This proposal also serves as a benchmark for health-monitoring aids in general hazardous situations.
Download

Paper Nr: 191
Title:

Searching for Weak Signals in the Web to Support Scenarios Building for Future Studies

Authors:

Rodrigo D. Santos, Edilson Ferneda, Hercules Antonio do Prado, Aluizio H. Filho, Ana P. Bernardi da Silva, Roseane Salvio and Elaine C. Marcial

Abstract: A specification of a multi-agent system for searching for weak signals on the Internet is proposed to support future studies. A set of software artifacts is presented, including: (i) functional/non-functional requirements; (ii) use cases involving human and software agents; (iii) a database model containing the main entities; and (iv) a role and functional model with the main interactions and activities the agents are involved in. A prototype was developed and evaluated by experts in future studies. From this specification/prototype, an early warning system can be developed to support intelligence analysts in producing qualified information for future studies.
Download

Area 5 - Human-Computer Interaction

Full Papers
Paper Nr: 37
Title:

Knowledge Sharing Live Streams: Real-time and On-demand Engagement

Authors:

Leonardo G. Fonseca and Simone J. Barbosa

Abstract: Live streams have been gaining importance in Human-Computer Interaction research and practice. A specific type of these broadcasts is the knowledge sharing live stream (KSLS). Embrapa (Brazilian Agricultural Research Corporation) uses KSLSs to disseminate its research results. In this paper we investigate how its audience engages with the material at different moments. We monitored nine of Embrapa’s broadcasts, applied an online survey to the viewers, analyzed access statistics and conducted semi-structured interviews. Our goal was to contrast our findings on KSLS audience engagement in live and on-demand periods with the literature on this topic, answering the following research questions: How does the engagement of KSLS viewers differ in real time and on demand? Which features could increase this engagement in these two different periods? According to our results, the takeaways of this work are: i) the live period attracted the public more and promoted more interactions; ii) the live audience wishes that the video be made available on demand; iii) new features, such as support for content documentation, multiple-choice questions, and temporal segmentation, could increase engagement in real-time and on-demand moments; and iv) our public did not show a strong preference for interacting via audio in the chat.
Download

Paper Nr: 99
Title:

Vis2Learning: A Scenario-based Guide to Support Developers in the Creation of Visualizations on Educational Data

Authors:

Maylon P. Macedo, Ranilson A. Paiva, Isabela Gasparini and Luciana M. Zaina

Abstract: Information Visualization provides techniques to build graphic representations that enhance human perception of data. In the educational data area, visualizations support professionals in analyzing large amounts of data in order to make decisions that improve the teaching-learning process. However, visualizations of educational data often do not fulfill the needs of end-users. In this paper, we present a scenario-based guide for the development of visualizations in the e-learning context. For each scenario, we provide the chart format, describe its aim and characteristics, and give examples of its application in the e-learning context. Besides, we provide recommendations to improve the visualized data based on users’ interaction with each chart. We conducted an evaluation with 26 participants divided into two groups, one of which used our guide while the other did not. Our results revealed that the participants who used the guide were more successful in building the visualizations. They reported that the guide allowed them to concentrate on the main purpose of the visualizations. We saw that the participants’ background in e-learning or in the use of charts did not influence the building of suitable solutions, which reinforces the usefulness of our guide.
Download

Paper Nr: 120
Title:

Assessing a Technology for Usability and User Experience Evaluation of Conversational Systems: An Exploratory Study

Authors:

Guilherme C. Guerino, Williamson F. Silva, Thiago A. Coleti and Natasha C. Valentim

Abstract: Conversational Systems (CS) are increasingly present in people’s daily lives. A CS must provide a good experience and meet the needs of its users. Therefore, Usability and User Experience (UX) evaluation is an appropriate step before making CSs available to society. To guide developers in identifying problems, improvement suggestions, and user perceptions during CS development, we developed a technology named Usability and User Experience Evaluation of Conversational Systems (U2XECS). U2XECS is a questionnaire-based technology that provides Usability and UX statements specific to evaluating CSs. We conducted an exploratory study to evaluate and evolve U2XECS. Our results evidenced positive points of U2XECS related to ease of use, usefulness, and intention to use. Moreover, we identified opportunities for improvement in U2XECS, such as ambiguous statements that generated misinterpretations among subjects.
Download

Paper Nr: 144
Title:

Mapping Personality Traits through Keystroke Analysis

Authors:

Felipe V. Goulart and Daniel O. Dantas

Abstract: Personality can be defined as a set of psychological features that may determine how one thinks, acts, and feels, and may directly influence an individual’s interests. The Big Five model is widely used to describe the main personality traits of an individual. This study aims to develop an approach to identify personality traits from keystroke dynamics data using neural networks. We developed a non-intrusive approach to collect keystroke dynamics data from users and used a self-assessment personality questionnaire to identify Big Five personality traits. Experiments showed no evidence that the exclusive use of keystroke dynamics characteristics provides enough information to identify an individual’s personality traits.
Download

Paper Nr: 148
Title:

An Interface Design Catalog for Interactive Labeling Systems

Authors:

Lucas Viana, Edson Oliveira and Tayana Conte

Abstract: Machine Learning (ML) systems have been widely used in recent years in different areas of human knowledge. However, to achieve highly accurate ML systems, it is necessary to train the ML model with data carefully labeled by specialists in the problem domain. In the context of ML and Human-Computer Interaction (HCI), there are studies that propose interfaces facilitating interactive labeling by domain specialists, in order to minimize effort and maximize productivity. This paper extends a previous secondary study that discusses such labeling systems, and proposes a catalog of design elements for the interface development of this type of system. We built the catalog based on the interface elements found in the studies analyzed in the previous secondary study. With this contribution, we expect to improve the development of better interfaces for interactive labeling systems and, thus, enhance the development of more accurate ML systems.
Download

Paper Nr: 185
Title:

End-user Evaluation of a Mobile Application Prototype for Territorial Innovation

Authors:

Eliza Oliveira, André C. Branco, Daniel Carvalho, Eveline Sacramento, Oksana Tymoshchuk, Luis Pedro, Maria J. Antunes, Ana M. Almeida and Fernando Ramos

Abstract: This study is part of a larger research effort taking place under the umbrella of the CeNTER Program, an interdisciplinary project that aims to promote the development of the Centro Region of Portugal. The general contribution of this paper is the evaluation of a mobile application prototype that promotes collaboration between the various agents involved in Tourism, Health and Wellbeing. For the evaluation of the prototype, different methods were employed, which included the collection of quantitative and qualitative data. Quantitative data were obtained through the combination of two User Experience evaluation tools (SUS and AttrakDiff) and from usability metrics of effectiveness and efficiency, which are key factors related to the usability of a product. Qualitative data were obtained using the Think-aloud protocol, which allowed immediate feedback from end-users on their experience of interacting with the prototype. Although there are still several improvements to be addressed, the overall end-users’ opinions show that the CeNTER application is a sustainable and timely contribution, with an interesting potential to help foster community-led initiatives. The article offers a better understanding of how to evaluate mobile applications that address the same subject as this study.
Download

Short Papers
Paper Nr: 90
Title:

Digital Generation and Posthumous Interaction: A Descriptive Analysis in Social Networks

Authors:

Juliana M. Cabral, Cristiano Maciel, Vanice C. Cunha, Jivago M. Ribeiro and Daniele Trevisan

Abstract: The large amount of data that can be left behind by someone when they die is undeniable, mainly on social network profiles, which are fed for years with varied information by the users. These profiles can serve as a way of remembering loved ones, and the way users interact with posthumous profiles can help in discovering how to deal with this new sensitive topic. Thus, this research seeks to investigate and understand the positioning of the Digital Generation on posthumous interaction in social networks and the main features that users find important for the design of profile pre-configurations. From the methodological viewpoint, the research used a bibliographic review, the development and application of an online questionnaire, and descriptive statistical analysis with data crossing to obtain the results. The results are compared with other generations participating in the research and with other published research on the subject.
Download

Paper Nr: 106
Title:

The Usability of Mobile Enterprise Resource Planning Systems

Authors:

Thomas Wüllerich and Alexander Dobhan

Abstract: This paper presents a model for end-user-based evaluation of the usability of mobile ERP systems. Recent studies show that the mobile use of ERP software is both crucial for user satisfaction and still improvable for many ERP systems. Therefore, ERP-specific usability models are necessary to meet the requirements of ERP systems in comparison to, e.g., apps for private use. The research objective is therefore to develop a model that enables software providers to measure and benchmark the usability of their software products. After a literature review, we introduce a usability model for the mobile application of ERP systems (mobile ERP). Our usability model is based on the widely used PACMAD model, which we modify for the context of ERP systems. This results in a new end-user-based model that differs from existing models because of its focus on end-users and the ERP context. Subsequently, the model is tested in an initial study with 19 test persons. The results of the study indicate two main findings. Firstly, the model allows the measurement of the usability of mobile ERP systems. Secondly, some key factors substantially affect the usability of mobile ERP systems.
Download

Paper Nr: 134
Title:

It’s a Match! A Knowledge based Recommendation System for Matching Technology with Events

Authors:

Genildo Gomes, Isabelle Rêgo, Moises Gomes, Júlia Conceição, Artur Andrade, Tayana Conte, Thaís Castro and Bruno Gadelha

Abstract: The use of technologies to promote interaction and engagement at events is part of modern entertainment. In different types of events, different technologies are used to increase this interaction. In scientific events, for example, organizers use voting platforms with the public; in music festivals, LED bracelets and the flashlight of the smartphone are used; while in cultural and sports events, there are digital cheerleading thermometers. Considering that experts in the field of events and entertainment are searching for new technologies, this study proposes a recommendation system that relates different classification aspects of events in order to suggest a list of appropriate technologies for that event, based on knowledge bases built from the experience of experts. The proposed solution was evaluated through acceptance studies using the Technology Acceptance Model (TAM) and interviews with six experts experienced in the production and organization of various events. Results indicate that users intend to use the platform to assist in the definition of technologies due to its innovative factor, among other information discussed in this paper.
Download

Paper Nr: 156
Title:

LogMe: An Application for Generating Logs in Immersive Interactions for UX Evaluation

Authors:

Victor Klisman, Luan S. Marques, João Pedro de Lima, Leonardo Marques, Genildo Gomes, Tayana Conte and Bruno Gadelha

Abstract: Immersive applications aim to stimulate interactions between the physical, virtual, and simulated worlds. Such applications stand out for transforming the public, often limited to a passive spectator role, into active participants in an event. Assessing the experience promoted by immersive applications is a challenge, as it involves difficulties inherent in the context of immersion. For example, users cannot be interrupted while they are immersed in the experience. In this sense, a non-intrusive way of collecting data that can indicate whether the experience was positive or negative is needed. The methods available in the literature depend on users’ spoken and observed reports during the interaction, but they are not applicable in all dimensions of evaluation and contexts of interaction, such as immersive events. The use of methods such as log capture can assist in the investigation of user interactions. In this work, we propose an application capable of recording logs from mobile devices while the user interacts with a certain immersive application. This will allow interactions to be recorded as they actually are, facilitating the investigation of the user’s feelings when performing a certain task.
Download

Paper Nr: 186
Title:

Stressed by Boredom in Your Home Office? On “Boreout” as a Side-effect of Involuntary Distant Digital Working Situations on Young People at the Beginning of Their Career

Authors:

Ioannis Starchos and Anke Schüll

Abstract: The main focus of this paper is on boredom and boreout as perceived by working novices driven into home office due to the COVID-19 pandemic. Because this situation is exceptional, the impacts on a crisis of meaning, job boredom, and a crisis of growth manifest themselves more clearly. Young people are the unit of interest within this paper, as a boreout could be devastating for their professional career. Leaning on recent literature, a qualitative analysis was conducted, followed by an anonymous online survey to test the viability of the approach. Only sparse indicators of a crisis of meaning were identified, alongside clear signals pointing towards boredom, strong indicators of a crisis of growth, and evidence of coping strategies relying on various communication tools to compensate for the lack of personal contact. This paper contributes to the body of knowledge by expanding research on boreout and by underlining the importance of its dimensions: crisis of meaning, job boredom, and crisis of growth. A moderating effect of IT equipment and IT support on establishing and maintaining connectivity in distant digital working situations became evident. This paper reports on work in progress; further research is necessary to confirm the results.
Download

Paper Nr: 231
Title:

A Dempster–Shafer Big Data Readiness Assessment Model

Authors:

Natapat Areerakulkan and Worapol A. Pongpech

Abstract: Data-driven transformation is a process in which an organization transforms its infrastructure, strategies, operational methods, technologies, or organizational culture to facilitate and encourage data-driven decision-making behaviors. Most important is the ability to handle big data in the organization. The literature shows that assessing organizations’ big data readiness for the transformation with a systematic and logical model is a topic that has yet to be addressed. The ability to create a systematic and logical big data readiness assessment model is crucial to the progress of the transformation. Such a model must also be able to handle uncertainty, which arises during the assessment due to various circumstances. To this end, we propose a five-tier big data readiness assessment framework based on a Dempster–Shafer model that provides a comprehensible and quantifiable readiness standing. We also present a numerical example of our framework and model based on an organization that we have previously assessed.
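The paper's five-tier framework is not reproduced in the abstract; as a hedged illustration of the underlying mathematics, Dempster's rule of combination fuses belief masses from two hypothetical assessors over readiness states, discarding and renormalizing conflicting evidence:

```python
from itertools import product


def dempster_combine(m1, m2):
    """Combine two Dempster-Shafer mass functions.

    m1, m2 map focal elements (frozensets of hypotheses) to masses
    summing to 1. Mass on pairs whose intersection is empty is the
    conflict K; the remainder is renormalized by 1 - K.
    """
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc  # K: mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict between sources")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}


# Hypothetical example: two assessors judge an organization's readiness.
# Mass on the whole frame (EITHER) represents ignorance/uncertainty.
READY, NOT_READY = frozenset({"ready"}), frozenset({"not_ready"})
EITHER = READY | NOT_READY
m1 = {READY: 0.6, EITHER: 0.4}
m2 = {READY: 0.5, NOT_READY: 0.3, EITHER: 0.2}
fused = dempster_combine(m1, m2)
```

This ability to represent ignorance explicitly (mass on the whole frame rather than forced point probabilities) is what makes Dempster–Shafer attractive for handling assessment uncertainty; the tier structure and readiness states above are assumptions for illustration, not the paper's actual model.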
Download

Paper Nr: 139
Title:

Augmented Reality Applied to Reducing Risks in Work Safety in Electric Substations

Authors:

Victor M. Rocha and Saul Delabrida

Abstract: Activities that involve electric energy are among the most dangerous and most harmful to the workers who perform them. Therefore, the general objective of this work is to develop a virtual reality application that simulates the use of augmented reality technology as a means of providing access guidance and safety instructions to electric substation operators/maintainers. For this purpose, a simplified 3D electrical substation was modelled for experimentation in virtual reality, in order to evaluate the user experience with the prototype and define the main requirements that can be used for the construction of the final application.
Download

Paper Nr: 141
Title:

Immersive UX: A UX Evaluation Framework for Digital Immersive Experiences in the Context of Entertainment

Authors:

Franciane Alves, Brenda Aguiar, Vinicius Monteiro, Elizamara Almeida, Leonardo Marques, Bruno Gadelha and Tayana Conte

Abstract: Digital Immersive Entertainment attracts thousands of people worldwide and can awaken new feelings and sensations in those who experience it. However, there is no standardized way of evaluating User eXperience (UX) and which UX measures should be considered in this context to determine whether the immersive experience was enjoyable and engaging for the audience. After considering how to evaluate the user experience in the context of immersive entertainment, we developed the Immersive UX, a UX evaluation framework considering important UX measures related to the evaluation of the immersive experience. In this sense, we based our framework on evaluating the following UX measures: flow, presence, and engagement. We carried out a study to investigate our framework’s feasibility by using it in a UX evaluation. This study examines how users felt when participating in a simulated cinema experience where they interacted with other people using different systems to support the immersive experience. We observed that our framework was able to capture what users feel when going through a systems-driven experience to support immersion. We were able to investigate users’ expectations and satisfaction, which allowed us to analyze whether the user’s immersive experience guided by digital systems was positive or not.
Download

Paper Nr: 192
Title:

Adaptive Complex Data-Intensive Web Systems via Object-oriented Paradigms: A Real-life Case Study

Authors:

Alfredo Cuzzocrea and Edoardo Fadda

Abstract: This paper focuses attention on the emerging class of Adaptive Complex Data-Intensive Web Systems (ACDIWS), which start from classical Adaptive Web Systems (AWS) and add innovative characteristics of complex application scenarios and big Web data. One of the state-of-the-art results is represented by the OO-XAHM framework, which makes use of an object-oriented approach for achieving the adaptation effect over complex data-intensive Web systems. Following this line of research, this paper contributes a real-life case study that shows the potentialities of OO-XAHM on the Web portal of the well-known Italian archaeological site of Pompeii.
Download

Paper Nr: 217
Title:

Promising Technologies and Solutions for Supporting Human Activities in Confined Spaces in Industry

Authors:

Taynan R. Silva, Bruno N. Coelho and Saul E. Delabrida

Abstract: Although there is a growing concern about accidents and illnesses at work, global statistics still reveal alarming data. It is estimated that 2.78 million people die each year worldwide due to labor factors. In this regard, information systems and automation systems applied to occupational health and safety are gaining prominence for accident prevention. One of the high-risk activities that has adopted technology as an ally is work in confined spaces. Currently, non-human-entry robotic systems have been widely adopted for this purpose. However, these solutions do not always cover all scenarios, still requiring humans to perform tasks in those locations. Thus, it is expected that the new era of work in confined environments will be increasingly guided by hybrid systems of human-computer interaction. Some solutions from this perspective foresee the use of wearable computing, virtual reality, and augmented reality to assist at these locations. The challenges in adopting these new technologies still lie in the characteristics of the environments themselves, which are hostile places with natural blockages for traditional communication and data transmission signals.
Download

Area 6 - Enterprise Architecture

Full Papers
Paper Nr: 23
Title:

Demonstrating GDPR Accountability with CSM-ROPA: Extensions to the Data Privacy Vocabulary

Authors:

Paul Ryan and Rob Brennan

Abstract: The creation and maintenance of a Register of Processing Activities (ROPA) are essential to meeting the Accountability Principle of the General Data Protection Regulation (GDPR). We evaluate a semantic model, CSM-ROPA, to establish the extent to which it can be used to express a regulator-provided accountability tracker to facilitate GDPR/ROPA compliance. We show that the ROPA practices of organisations are largely based on manual paper-based templates or non-interoperable systems, leading to inadequate GDPR/ROPA compliance levels. We contrast these current approaches to GDPR/ROPA compliance with best practice for regulatory compliance and identify four critical features of systems to support accountability. We conduct a case study to analyse the extent to which CSM-ROPA can be used as an interoperable, machine-readable mediation layer to express a regulator-supplied ROPA accountability tracker. We demonstrate that CSM-ROPA can successfully express 92% of ROPA accountability terms. The addition of connectable vocabularies brings the expressivity to 98%. We identify three terms for addition to CSM-ROPA to enable full expressivity. The application of CSM-ROPA provides opportunities for demonstrable and validated GDPR compliance. This standardisation would enable the development of automated, interoperable tools to support accountability and the demonstration of GDPR compliance.
Download

Paper Nr: 45
Title:

Retailer’s Dual Role in Digital Marketplaces: Towards Architectural Patterns for Retail Information Systems

Authors:

Tobias Wulfert and Reinhard Schütte

Abstract: Multi-sided markets (MSMs) have entered the retail sector as digital marketplaces and have proven to be a successful business model compared to traditional retailing. Established retailers are increasingly setting up MSMs and also participating in the MSMs of pure online companies. Retailers transforming into digital marketplaces orchestrate formerly independent markets and enable retail transactions between participants while simultaneously selling articles from their own assortment to customers on the MSM. The retailer’s dual role must be supported by retail information systems. However, this support is not explicitly represented in existing reference architectures (RAs) for retail information systems. Thus, we propose to develop a RA for retail information systems facilitating the orchestration of supply- and demand-side participants, selling their own articles, and providing innovation platform services. We apply a design science research approach and present seven architectural requirements that a RA for MSM business models needs to fulfill (dual role, additional participants, affiliation, matchmaking, variety of services, innovation services, and aggregated assortment) from the rigor cycle. From a first design iteration, we propose three exemplary conceptual architectural patterns as a solution for three of these requirements (matchmaking for participants, innovation platform services, and aggregated assortment).
Download

Paper Nr: 49
Title:

Does Fractal Enterprise Model Fit Operational Decision Making?

Authors:

Victoria Klyukina, Ilia Bider and Erik Perjons

Abstract: The paper reports on testing the suitability of the so-called Fractal Enterprise Model (FEM) for operational decision making. The project in which the testing was done aimed at identifying areas for cost-reduction improvements in a support department of a European branch of an international high-tech concern. The idea was to model the department’s operational activities at an intermediate level of detail, just enough to identify the areas that need attention or provide an opportunity for cost reduction. FEM connects enterprise business processes with the assets that are used in and are managed by these processes. It also allows splitting a process into subprocesses in order to reach an intermediate level of detail. The split is done by using a special type of asset called a stock, which, for example, could be a stock of orders or a stock of parts to be used in an assembly process. The experience from the project shows that the level of detail achieved by using FEM is sufficient to understand the activities completed by the department and to identify possible ways for improvement. Furthermore, two generic patterns that can help to identify some areas of improvement have been established; these can be used in other projects.
Download

Paper Nr: 89
Title:

The Data Quality Index: Improving Data Quality in Irish Healthcare Records

Authors:

David Hickey, Rita O. Connor, Pauline McCormack, Peter Kearney, Roosa Rosti and Rob Brennan

Abstract: This paper describes the Data Quality Index (DQI), a new data quality governance method to improve data quality in both paper and electronic healthcare records. This is an important use case, as digital transformation is a slow process in healthcare and hybrid systems exist in many countries such as Ireland. First, a baseline study of the nature and extent of data quality issues in Irish healthcare records was conducted. The DQI model and tools were then developed, based on established data quality and data governance principles. Evaluation of the model and tools showed that a significant improvement in data quality was achieved in a healthcare setting. This initial evaluation of the model was against paper healthcare records, but the model can also be used as part of an electronic healthcare record system.
Download

Paper Nr: 104
Title:

Synchronous and Asynchronous Requirements for Digital Twins Applications in Industry 4.0

Authors:

Rafael F. Vitor, Breno S. Keller, Débora M. Barbosa, Débora N. Diniz, Mateus C. Silva, Ricardo R. Oliveira and Saul D. S.

Abstract: The Industry 4.0 revolution brings up novel concepts and restraints when proposing and designing novel applications. Although its perspectives are new, the main restraints must also observe conservative constraints of industrial processes, such as real-time capability and asynchronous design. Among the main tools for developing cutting-edge industrial applications, Digital Twins (DTs) are a novel and relevant approach to presenting information about and interacting with industrial processes. This work evaluates how to model and measure the primary Industry 4.0 constraints in designing novel applications using DTs. It separates the restraints into two categories: asynchronous and synchronous requirements. First, this work designs a high-level DT system communication flow through a Petri net model to analyze the asynchronous requirements. Then, it performs a synchronous test with a physical instance of the proposed model. The results display the requirements for safe operation of the case-study system regarding timing and modeling constraints.
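The paper's Petri net itself is not given in the abstract; as a minimal sketch of the firing semantics such an asynchronous analysis relies on, here is a toy place/transition net for a hypothetical sensor-to-twin update step (the place names are illustrative assumptions, not the authors' model):

```python
def fire(marking, transition):
    """Fire one Petri-net transition if enabled; return the new marking,
    or None when the transition is not enabled.

    marking: dict place -> token count.
    transition: (inputs, outputs), each a dict place -> arc weight.
    """
    inputs, outputs = transition
    if any(marking.get(p, 0) < w for p, w in inputs.items()):
        return None  # not enough tokens on some input place
    new = dict(marking)
    for p, w in inputs.items():
        new[p] -= w          # consume input tokens
    for p, w in outputs.items():
        new[p] = new.get(p, 0) + w  # produce output tokens
    return new


# Hypothetical DT flow: a sensor reading and an idle twin are both
# required before the twin's state can be updated.
m0 = {"sensor_data": 1, "dt_idle": 1}
t_update = ({"sensor_data": 1, "dt_idle": 1}, {"dt_updated": 1})
m1 = fire(m0, t_update)
```

Exploring which markings are reachable under such firing rules is what lets a Petri-net analysis check asynchronous requirements like absence of deadlock.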
Download

Paper Nr: 113
Title:

An Agri-Food Supply Chain Traceability Management System based on Hyperledger Fabric Blockchain

Authors:

Angelo Marchese and Orazio Tomarchio

Abstract: Consumers are nowadays very interested in food product quality and safety. It is challenging to track the provenance of data and maintain its traceability throughout the whole supply chain network without an integrated information system. For this purpose, Agriculture and Food (Agri-Food) supply chains are becoming complex systems which, in addition to tracking and storing orders and deliveries, are responsible for guaranteeing the transparency and traceability of the food production and transformation process. However, traditional supply chains are centralized systems, mainly depending on a third party for trading and trusting purposes. In this paper we propose a fully distributed approach, based on blockchain technology, to define a supply chain management system able to provide quality, integrity, and traceability across the entire supply chain process. A prototype based on Hyperledger Fabric has been designed and developed in order to show the effectiveness of the approach and its coverage of the main use cases needed in a supply chain network.
Download

Paper Nr: 121
Title:

An Agile Approach for Modeling Enterprise Architectures

Authors:

Petrônio Medeiros, Alixandre Santana, Myllena Lima, Hermano Moura and Miguel Mira da Silva

Abstract: Organizations need a holistic view of their information assets in order to lead a digital transformation. Information assets can be obtained by modeling the Enterprise Architecture (EA) of the organization. However, current EA modeling methods have been criticized for being heavy and rigid. In fact, EA modeling methods present problems similar to those faced by traditional (waterfall) software development methods. We propose that EA modeling could benefit from ideas presented in the Agile Manifesto. Previous research has already pointed out the importance of finding the “ideal” boundaries for the intersection between agile software development methods and EA practices. In this paper, we apply the Design Science Research Methodology (DSRM) to propose an Agile Enterprise Architecture Modeling Method based on agile principles and values. The proposal was demonstrated in two organizations, from which we extracted evidence such as continuous deliveries, short iterations (Sprints), proximity to customers, and systematic improvements in the process. We conclude that our proposal can improve EA modeling.
Download

Short Papers
Paper Nr: 21
Title:

Deriving a Process for Interorganizational Business Capability Modeling through Case Study Analysis

Authors:

Fatih Yílmaz, Julian Feldmeier and Florian Matthes

Abstract: To stay competitive in a globalized, constantly changing market environment with ongoing technological advancements, companies are not only focusing on their organization’s key capabilities but also collaborating more closely with partners, suppliers, customers, and even competitors. By analyzing an enterprise’s business capabilities, business leaders get an abstracted, holistic view of the organization and of the alignment of its business model and visions with the IT. Further, business capabilities and their visualizations can help to improve communication with business partners. Therefore, different companies operating in the same industry collaboratively identify and model common business capabilities to define a shared ontology. Based on the knowledge gained through a literature review on business capability modeling, we conducted a multiple case study in this field. As a result, we derived a reference process for interorganizational business capability modeling, which we evaluated by conducting semi-structured interviews with members of different interorganizational initiatives. The outcome of our research is an iterative process for modeling business capabilities in interorganizational collaborations.
Download

Paper Nr: 56
Title:

An Evolution-based Approach towards Next-Gen Defence HQ and Energy Strategy Integration

Authors:

Ovidiu Noran and Peter Bernus

Abstract: There is an increasing worldwide impetus towards a ‘nil emissions’ industry and energy production. While new technologies and materials have made the concept of renewable energy viable, there are still significant challenges with regard to the transition process in view of balancing the economic, security, and environmental aspects. At the same time, the advent of the Internet of (Every)thing/s paradigm and the increasingly dynamic balance of power manifesting itself in various parts of the world have brought about the stringent need to evolve military defence doctrines, starting at the headquarters (command and control) level. As energy and national security clearly display a strong connection, it would be highly advisable to maintain this bond along the life of these two aspects, e.g. by evolving them according to similar principles and in a synchronised manner. This paper describes challenges faced by the two aspects and proposes a way forward that preserves and enhances the symbiosis necessary for a planned energy transition and effective national defence. Thus, while each region and nation will face specific geo-political issues, this paper initiates the process of elaborating a guiding framework (which can then be customised) meant to maintain the above-mentioned critical bond during the various possible transition stages, in a holistic and life cycle-based manner.
Download

Paper Nr: 75
Title:

An Open Platform for Smart Production: IT/OT Integration in a Smart Factory

Authors:

Dan Palade, Charles Moller, Chen Li and Soujanya Mantravadi

Abstract: As industries become increasingly digitalized, new manufacturing concepts require redesigning the information systems architecture. The Smart Production Laboratory is used as a learning factory aimed at exploring new Industry 4.0 technologies and demonstrating Smart Production solutions. The initial Smart Production Laboratory was built on a proprietary software stack. Experimenting with the information systems architecture using proprietary systems has proven difficult, which is why we built a complete modular open-source software stack for the Smart Production Laboratory intended to enable high-speed and low-cost development of demonstrators for research, teaching, and innovation. The purpose of this research is therefore to capture the development of the software stack and identify the required target architecture for the platform. This is further used for discussing potential future challenges in demonstrating new and innovative Smart Production concepts.
Download

Paper Nr: 140
Title:

Dual Capability EAM for Agility in Business Capability Building: A Systems Theoretical View

Authors:

Jouko Poutanen and Mirja Pulkkinen

Abstract: Through two cases of IT-enabled business capability building in large enterprises, this paper elucidates how a systems theory approach can explain the enterprise architecture management (EAM) challenge of supporting business agility. The observation in both cases is that a legacy EAM approach does not adapt to a business development scenario involving agility. This leads to a study of the nature of the challenges in EAM when enabling strategic business moves involving new technologies at the business unit level. For projects of the type in these cases, we do not find a fitting paradigm in the EAM literature. Suggested solutions are IT bimodality, or Two-Speed IT; however, their combination with EAM is scarcely addressed in earlier research. To provide guiding ideas for the further development of a dual capability EAM approach, for which there is an evident need, we develop a systems theoretical starting point to examine the cases. Complex Adaptive System (CAS) characteristics appear to give the necessary explanations to build on. Supported by this theoretical development, the study results in principles of a dual capability EAM for agile strategic business capability building involving enterprise re-structuring.
Download

Paper Nr: 171
Title:

Boost the Potential of EA: Essential Practices

Authors:

Hong Guo, Jingyue Li, Shang Gao and Darja Smite

Abstract: Enterprise Architecture (EA) has been applied widely in industry, as it brings important benefits that ease communication and improve business-IT alignment. However, various challenges have also been reported due to the difficulty and complexity of applying it. Some empirical studies showed that EA still plays a limited role in many organizations. In this research, we present further findings on where the potential of EA could be better used, derived from our analysis of advanced EA tool recommendations. Based on these findings, we propose four essential EA practices and the rationales behind them, in order to improve the understanding of current practices and bring insights for future studies to boost the potential of EA.
Download

Paper Nr: 172
Title:

Privacy by Design Enterprise Architecture Patterns

Authors:

Maria D. Coelho, André Vasconcelos and Pedro Sousa

Abstract: With the fast technological evolution and globalisation, the importance of data protection increases as the amount of data created and stored continues to grow at unprecedented rates. Organisations are encouraged to implement technical and organisational measures at the earliest stages of the design of the processing operations, in a way that ensures privacy and data protection principles right from the start. The General Data Protection Regulation (GDPR), whose aim is to ensure EU citizens’ rights and the respect for their personal data, addresses this topic by requiring that organisations put in place appropriate measures to implement the data protection principles effectively. Our proposal aims to use enterprise architecture patterns to integrate regulatory concerns, with special emphasis on the data subject’s rights. We also aim at ensuring that systems comply with the regulation from the beginning of their definition, in light of Privacy by Design principles.
Download

Paper Nr: 174
Title:

A Conceptual Reference Framework for Data-driven Supply Chain Collaboration

Authors:

Anna-Maria Nitsche, Christian-Andreas Schumann and Bogdan Franczyk

Abstract: This paper presents the preliminary results of the systematic, empirically based development of a conceptual reference framework for data-driven supply chain collaboration, based on the process model for empirically grounded reference modelling by Ahlemann and Gastl. The wider application of collaborative supply chain management is a requirement of increasingly competitive and global supply networks. Thus, the different aspects of supply chain collaboration, such as the inter-organisational exchange and sharing of data and knowledge, are considered essential factors for organisational growth. The paper attempts to fill the gap of a missing overview of this field by providing the initial results of the development of a comprehensive framework for data-driven supply chain collaboration. It contributes to the academic debate on collaborative enterprise architecture within collaborative supply chain management by providing a conceptualisation and categorisation of supply chain collaboration. Furthermore, this paper presents a valuable contribution to supply chain processes in organisations of all sectors, both by providing a macro-level perspective on collaborative supply chain management and by delivering a practical contribution in the form of an adaptable reference framework for application in the information technology sector.
Download

Paper Nr: 177
Title:

Complexity and Adaptive Enterprise Architecture

Authors:

Wissal Daoudi, Karim Doumi and Laila Kjiri

Abstract: In the current VUCA (volatility, uncertainty, complexity and ambiguity) environment, enterprises face constant threats and opportunities arising from internal and external factors. Those factors can impact various parts of the enterprise in the form of changes. Thus, Adaptive Enterprise Architecture (EA) is leveraged to support continuous adaptation to evolving transformations. At the same time, complexity has been identified as one of the major challenges of the Enterprise Architecture discipline, and one of the criteria of Adaptive EA is the ability to monitor and control the complexity of changes. Consequently, in this paper, we propose a conceptualization of EA complexity measurement, drilled down into factors and indicators. First, we give a brief summary of the criteria that we consider compulsory for Adaptive Enterprise Architecture and an overview of the model developed in our previous work. Then we investigate related work on complexity from a broader perspective. Finally, we describe our approach to assessing complexity based on the proposed indicators.
Download

Paper Nr: 180
Title:

An IT Infrastructure for Small- and Medium-sized Enterprises Willing to Compete in the Global Market

Authors:

Francesco Pilotti, Gaetanino Paolone, Daniele Di Valerio, Martina Marinelli, Roberto Cocca and Paolino Di Felice

Abstract: Context: Small and medium-sized enterprises (SMEs) are the backbone of the economy of most countries. There is ample evidence in the literature that digitalisation improves the market performance of enterprises and, as a consequence, helps their businesses grow. Aims: This position paper sketches the authors’ vision of an IT infrastructure for SMEs willing to compete in the global market. Method: A literature review is conducted on relevant topics concerning SMEs. In light of the published studies, two factors are essential for the survival of SMEs in the global market: (a) allying with SMEs operating in the same market segment; (b) offering an outstanding shopping experience to their customers. Results: The pillar of the proposal is the notion of a Digital Network (DN), i.e., a network of collaborating SMEs physically distributed over a territory that share the objective of selling goods and/or services to potential consumers through a digital platform. We envision the availability of a “generator” of DNs as the main pillar for helping SMEs. Each instance returned by the generator consists of two integrated portals: the SMEs Portal and the Customer Portal. The present study provides preliminary findings that support the soundness of the project under way.
Download

Paper Nr: 181
Title:

Future ERP Systems: A Research Agenda

Authors:

Benedict Bender, Clementine Bertheau and Norbert Gronau

Abstract: This paper presents a research agenda for the current generation of ERP systems, developed on the basis of a literature review of their current problems. The problems are presented following the ERP life cycle. In a second step, the identified problems are mapped onto a reference architecture model of ERP systems, an extension of the three-tier architecture model widely used in practice. The research agenda is structured according to the reference architecture model and addresses the problems identified at the data, infrastructure, adaptation, process, and user interface layers.
Download

Paper Nr: 211
Title:

Organizational Readiness Assessment for Open Source Software Adoption

Authors:

Lucía Méndez-Tapia and Juan P. Carvallo

Abstract: Open Source Software (OSS) is probably the most iconic implementation of the Open Innovation business paradigm, due to its capacity to combine technical benefits with business advantages. Over time, organizations have faced the OSS adoption challenge mainly by strengthening their internal and technical elements. However, the rapid changes in business dynamics, together with the comprehensiveness and fast development of open paradigms, show that a new set of conditions must be satisfied to achieve a successful OSS adoption. These conditions, considered critical success factors, involve a wide range of resources, capacities and skills, in both internal and external scopes. Hence, although adopter organizations should be better prepared to face the challenges related to collaborative innovation, they lack a systematic approach to assess their readiness for those adoption challenges. In this context, the present research work proposes a model to assess organizational readiness, considering the adopter as part of a living business ecosystem, where the relationships arising from co-development with developer communities have mutual business impact at the strategic, tactical, and operational levels.
Download

Paper Nr: 22
Title:

Predictive Maintenance Model based on Asset Administration Shell

Authors:

Salvatore Cavalieri and Marco G. Salafia

Abstract: Maintenance is one of the most important aspects of industrial and production environments. The availability of huge amounts of data coming from sensors and embedded systems has enabled the realisation of Predictive Maintenance (PdM), an approach that aims to schedule maintenance tasks on the basis of historical data before failures occur, avoiding machine breakdowns and reducing the costs due to unnecessary maintenance actions. The adoption of vendor-specific solutions for predictive maintenance and the heterogeneity of the technologies adopted in the brownfield for the condition monitoring of machinery reduce the flexibility and interoperability required by Industry 4.0. The paper presents a PdM model leveraging the Asset Administration Shell (AAS), introduced in the Reference Architecture Model for Industrie 4.0 (RAMI 4.0), as a means to enhance interoperability and to enable the flexibility and re-configuration of production around a PdM solution.
Download

Paper Nr: 62
Title:

Benefits of the Enterprise Data Governance in Industry: A Qualitative Research

Authors:

Rodrigo Prado, Edmir V. Prado, Alexandre Grotta and Andre M. Barata

Abstract: Data governance policies and procedures (DGPP) ensure proactive and efficient data management within the enterprise context. Thus, corporate DGPP projects may result in positive impacts (which we name benefits) for these companies. However, we found few studies reporting these benefits in developing countries. Another gap is the absence of a DGPP benefits model. Given this context, we first created a DGPP benefits model (DGB-M) via a Systematic Literature Review; we then planned and conducted case studies in four different Brazilian industry sectors: agribusiness, fertilizers, automotive, and logistics. The main results are: (i) the DGB-M itself; (ii) evidence that 62% of the processes described by the DGB-M were implemented in these four cases; (iii) evidence that 68% of the expected DGB-M benefits were achieved in these cases; and (iv) the lessons learned from the cases. These results are highly relevant for forecasting the benefits and challenges of future DGPP projects.
Download

Paper Nr: 86
Title:

Enterprise Architecture Patterns for GDPR Compliance

Authors:

Clara Teixeira, André Vasconcelos, Pedro Sousa and Mª J. Marques

Abstract: With the growth of technology and the personalization and customization of internet experiences, personal data is being stored and processed more and more. In some cases, the data subject has not agreed to the collection or to the purpose of the processing. To address this, the European Union (EU) Parliament approved the General Data Protection Regulation (GDPR), a regulation that has the data subjects’ interests in mind. Since some of its concepts and requirements are hard to comprehend, patterns can help system architects and engineers deliver GDPR-compliant information systems. It is important to emphasize that these privacy-related concerns should be addressed at the design level, not after implementation; this methodology is mostly known as privacy by design. This work focuses on the requirements brought by the GDPR and on providing enterprise architecture patterns to achieve GDPR compliance by proposing a library of patterns. The library is organized into 11 use cases together with the GDPR principles that they address; it contains 22 patterns, each one handling one or more use cases, modeled in ArchiMate for a clearer understanding of the solutions. The patterns are applied to a case study, and their impacts are assessed.
Download

Paper Nr: 198
Title:

InfoMINDS: An Interdisciplinary Framework for Leveraging Data Science upon Big Data in Surface Mining Industry

Authors:

Vitor A. Pinto and Fernando S. Parreiras

Abstract: Aiming to become more and more data-driven, companies are leveraging data science upon big data initiatives. However, to achieve a better cost-benefit ratio, it is important for companies to understand all the aspects involved in such initiatives. The main goal of this paper is to provide a framework that allows professionals from the mining industry to accurately describe data science upon big data. The following research question was addressed: “Which essential components characterize an interdisciplinary framework for data science upon big data in the mining industry?”. To answer this question, we extend the OntoDIVE ontology to create a framework capable of explaining the aspects involved in such initiatives for the mining industry. As a result, this paper presents InfoMINDS, a framework for data science upon big data relating people, processes, and technologies in the mining industry. The paper contributes to leveraging data science initiatives upon big data by allowing the application of OntoDIVE to real-case scenarios in the mining industry.
Download

Paper Nr: 205
Title:

Crowd-Innovation: Crowdsourcing Platforms for Innovation

Authors:

Roberta Cuel

Abstract: Companies fostering innovation take advantage of an emergent combination of various factors such as human minds, tools, networks, and technologies. Crowdsourcing platforms bring all these elements together and offer quite an interesting tool for all the innovation phases, from idea creation to the market. Despite the increasing utilization of these platforms, a systematic analysis of the types of services and contributions they support is missing. This work analyzes some of the most used crowdsourcing platforms and classifies them according to the type of contribution they may provide in the innovation process. Using an emergent analysis approach, the following contribution categories were identified: idea contests, ongoing idea platforms, platforms for idea screening, innovation platforms, R&D platforms, design contest platforms, ongoing design platforms, creative contests, and platforms for virtual concept testing. In this paper, these nine categories are described in depth to explain how they serve the various phases of the innovation process: idea generation and testing; research and development of rough concepts; detailed concept development and testing; production; and market launch.
Download