

The role of the tutorials is to provide a platform for a more intensive scientific exchange amongst researchers interested in a particular topic and as a meeting point for the community. Tutorials complement the depth-oriented technical sessions by providing participants with broad overviews of emerging fields. A tutorial can be scheduled for 1.5 or 3 hours.

Tutorial proposals are accepted until:

January 30, 2018

If you wish to propose a new tutorial, please fill out and submit this Expression of Interest form.


How to Combine Requirements and Interaction Design 
Lecturer(s): Hermann Kaindl

Data Science using the Shell 
Lecturer(s): Andreas Schmidt and Steffen G. Scholz

How to Combine Requirements and Interaction Design


Hermann Kaindl
Vienna University of Technology
Brief Bio
Hermann Kaindl joined the Vienna University of Technology in early 2003 as a full professor. In the same year, he was elected as a member of the University Senate. Prior to moving to academia, he was a senior consultant with the division of program and systems engineering at Siemens AG Austria, where he gained more than 24 years of industrial experience in software development. His current research interests include software and systems engineering, focusing on requirements engineering and architecting, and human-computer interaction as it relates to interaction design and the automated generation of user interfaces. He has published 5 books and more than 220 refereed papers in journals, books and conference proceedings. He is a Senior Member of the IEEE, a Distinguished Scientist member of the ACM and a member of the AAAI, and is on the executive board of the Austrian Society for Artificial Intelligence.

When the requirements and the interaction design for the user interface of a system are developed separately, they will most likely not fit together, and the resulting system will be less than optimal. Even if all the real needs are covered in the requirements and implemented, a poor interaction design and its resulting user interface may induce errors in human-computer interaction; such a system may not even be used at all. Conversely, a great user interface for a system with features that are not required will not be very useful either.
This tutorial explains the joint modeling of (communicative) interaction design and requirements (scenarios and use cases) through discourse models and domain-of-discourse models as developed by the proposer and his team. (These will also be briefly contrasted with the task-based modeling approach CTT.) While these models were originally devised for capturing interaction design only, it turned out that they can also be viewed as precisely and comprehensively specifying classes of scenarios, i.e., use cases. In this sense, they can also be utilized for specifying requirements.
User interfaces for such software systems can be generated semi-automatically from our discourse models, domain-of-discourse models and requirements specifications. This is especially useful when user interfaces tailored to different devices are needed. In this way, interaction design facilitates requirements engineering and makes applications both more useful and usable.


Keywords

Requirements, interaction design, user interfaces, scenarios, use cases

Target Audience

This tutorial is targeted towards people who are supposed to work on the requirements or the interaction design, e.g., requirements engineers, interaction designers, user interface developers, or project managers. It will be of interest for teachers and students as well.
The value for the attendees is primarily improved understanding of a potential separation of requirements engineering and interaction design, and how it can be overcome by combining them to make business applications both more useful and usable.
Based on previous experience, this tutorial works well with 5 to 20 participants.

Detailed Outline

1. Introduction 5min
1.1 Brief introduction of the tutor
1.2 Brief introduction of the participants
1.3 Motivation and overview
2. Background 15min
2.1 Requirements
2.2 Object-oriented modeling features and their UML representation
2.3 Scenarios / Use Cases
2.4 Interaction design
2.5 Ontologies
2.6 Speech acts
3. Functions / tasks, goals and scenarios / use cases 30min
3.1 Relation between scenarios and functions / tasks
3.2 Relation between goals and scenarios
3.3 Composition of these relations
3.4 A systematic design process based on these relations
3.5 Exercise
4. Requirements and object-oriented models 25min
4.1 Metamodel in UML
4.2 Requirements and objects
4.3 Exercise
5. Interaction design based on scenarios and discourse modeling 35min
5.1 Interaction Tasks derived from scenarios
5.2 Communicative Acts
5.3 Adjacency Pair
5.4 Rhetorical Structure Theory (RST) relations
5.5 Procedural constructs
5.6 Conceptual Discourse Metamodel
5.7 Duality with Task-based modeling
6. Use case specification 20min
6.1 Use case diagram
6.2 Use case report (RUP)
6.3 Sketch of flow of events through scenarios
6.4 Business process — Business Use Case
6.5 Specification based on discourse modeling
7. Exercises 25min
7.1 Try to understand the model sketch of a discourse
7.2 Try to model a discourse yourself
8. Sketch of automated user-interface generation 20min
8.1 Process of user-interface generation
8.2 Examples of generated user interfaces
8.3 Unified Communication Platform
9. Summary and conclusion 5min

Secretariat Contacts

Data Science using the Shell


Andreas Schmidt
Karlsruhe Institute of Technology
Brief Bio
Prof. Dr. Andreas Schmidt is a professor at the Department of Computer Science and Business Information Systems of the Karlsruhe University of Applied Sciences (Germany). He lectures in the fields of database information systems, data analytics and model-driven software development. Additionally, he is a senior research fellow in computer science at the Institute for Applied Computer Science of the Karlsruhe Institute of Technology (KIT). His research focuses on database technology, knowledge extraction from unstructured data/text, Big Data, and generative programming. Andreas Schmidt was awarded his diploma in computer science by the University of Karlsruhe in 1995 and his PhD in mechanical engineering in 2000. Dr. Schmidt has numerous publications in the field of database technology and information extraction. He regularly gives tutorials at international conferences on Big Data-related topics and model-driven software development. Prof. Schmidt has followed sabbatical invitations from renowned institutions such as the Systems Group at ETH Zurich in Switzerland and the Database Group at the Max Planck Institute for Informatics in Saarbrücken, Germany.
Steffen G. Scholz
Karlsruhe Institute of Technology
Brief Bio
Dipl.-Ing. Dr. Steffen G. Scholz has more than 15 years of R&D experience in the field of polymer micro & nano replication, with a special focus on injection moulding and the relevant tool-making technologies. He is an expert in process optimization and in algorithm design and development for micro replication processes. He studied mechanical engineering with a special focus on plastic processing and micro injection moulding and obtained his degree from the University of Aachen (RWTH). He obtained his PhD from Cardiff University in the field of process monitoring and optimization in micro injection moulding and led a team in micro tool making and micro replication at Cardiff University. Dr. Scholz joined KIT in 2012, where he now leads the group for process optimization, information management and applications (PIA).

For data analysis, we typically load the data into a dedicated tool, such as a relational database, the statistics environment R, Mathematica, or some other specialized program.
Often, however, there is another option, available on nearly every computer with a sufficient amount of storage. Many shells, such as bash or csh, come with a set of powerful tools for manipulating and transforming data and for performing analyses such as aggregation. Besides being freely available, these tools have the advantage that they can be used immediately, without first transforming and loading the data into a target system. Moreover, since they are typically stream-based, they can process huge amounts of data without running out of main memory. With the additional use of gnuplot, sophisticated plots can easily be generated.
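As a minimal sketch of such stream-based processing (the file name and column layout below are invented for illustration), an aggregation that would need a GROUP BY in a database can be done in a single awk pass, reading the input line by line:

```shell
# Hypothetical CSV of web-server hits: date,status,bytes
printf '2018-01-01,200,512\n2018-01-01,404,0\n2018-01-02,200,1024\n' > hits.csv

# Stream-based aggregation: total bytes per day,
# without loading the whole file into memory at once
awk -F, '{ sum[$1] += $3 } END { for (d in sum) print d, sum[d] }' hits.csv | sort
# → 2018-01-01 512
#   2018-01-02 1024
```

Only the per-group running totals are kept in memory, so the same one-liner scales to inputs far larger than RAM.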
The aim of this tutorial is to present the most useful tools, such as cat, grep, tr, sed, awk, comm, uniq, join, split, bzip2, bzcat, and bzgrep, and to introduce how they can be used together. For example, many queries that would typically be formulated in SQL can also be expressed with these tools, as will be shown in the tutorial.
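To give the flavour of such an SQL-to-shell translation (the relations and their columns here are invented for illustration), an inner join of two comma-separated files can be expressed with sort and join; join requires both inputs to be sorted on the join field:

```shell
# Two hypothetical relations: customers(id,name) and orders(id,customer_id)
printf '1,alice\n2,bob\n' > customers.txt
printf '10,1\n11,2\n12,1\n' > orders.txt

# SQL: SELECT o.id, c.name FROM orders o JOIN customers c ON o.customer_id = c.id
sort -t, -k2,2 orders.txt    > orders.sorted     # sort orders on the join field
sort -t, -k1,1 customers.txt > customers.sorted  # sort customers on their key
join -t, -1 2 -2 1 -o 1.1,2.2 orders.sorted customers.sorted
# → 10,alice
#   12,alice
#   11,bob
```

The -o option picks the output columns (file.field), much like the SELECT list of the query.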
The tutorial also includes hands-on parts, in which the participants carry out a number of practical data-analysis, transformation and visualization tasks.

Background Knowledge

Participants should be familiar with using a shell such as bash, sh, csh, or the DOS shell.

Duration (3 hours)

  • Introduction 15 min.
  • Commands/tools for structured data 45 min.
  • Hands-on Part I 30 min.
  • Commands/tools for unstructured data 30 min.
  • Visualization 30 min.
  • Hands-on Part II 30 min.
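The part on unstructured data in the outline above typically revolves around pipelines built from tr, sort and uniq; a classic example (with a made-up input file) is counting word frequencies in plain text:

```shell
# Hypothetical text file to analyze
printf 'to be or not to be\n' > hamlet.txt

# One word per line, sort so duplicates are adjacent,
# count the duplicates, then rank by count (highest first)
tr -s ' ' '\n' < hamlet.txt | sort | uniq -c | sort -rn
```

The same pattern scales from a one-line example to gigabytes of log files, since every stage processes its input as a stream.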
Software Requirements for the hands-on parts:

  • Unix and Mac users: none; the needed tools are already part of your system
  • Windows users: please install Cygwin on your computer (gnuplot must additionally be selected during the Cygwin installation process)
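To verify the setup for the visualization part, a quick check along the following lines can be used (the data set is invented; the dumb terminal makes gnuplot render ASCII art, so no graphical display is required):

```shell
# Generate a small hypothetical data set: x and x^2, one pair per line
awk 'BEGIN { for (x = 1; x <= 5; x++) print x, x * x }' > squares.dat

# Render it as ASCII art if gnuplot is installed
if command -v gnuplot >/dev/null; then
  gnuplot -e "set terminal dumb; plot 'squares.dat' with lines notitle"
else
  echo "gnuplot not found - please install it for the visualization part"
fi
```

If a small curve appears in the terminal, both awk and gnuplot are ready for the hands-on sessions.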

Secretariat Contacts