
Previous Invited Speakers

The researchers below were distinguished invited speakers at previous ICEIS conferences.
We are indebted to them for their contributions to raising the level of the conference.

2021 | 2020 | 2019 | 2018 | 2017 | 2016 | 2015 | 2014 | 2013 | 2012 | 2011 | 2010 | 2009 | 2008 | 2007 | 2006 | 2005 | 2004


From Representation to Mediation: Modeling Information Systems in a Digital World  
Jan Recker, University of Hamburg, Germany

Myths and Misconceptions about Machine Learning and How They Are Related to Software Engineering  
Stefan Kramer, Johannes Gutenberg - Universität Mainz, Germany

What Can We Learn from Play?    
Panos Markopoulos, Eindhoven University of Technology, Netherlands

How AI and Digital Are Key of Manufacturers Survival  
Eric Prevost, Oracle, United States

Impact of End User Human Aspects on Software Engineering  
John Grundy, Monash University, Australia


Software Similarities and Clones: A Curse or Blessing?    
Stanislaw Jarzabek, Bialystok University of Technology, Poland

Subjective Databases  
Alon Halevy, Facebook AI, United States

Digital Innovation and Transformation to Business Ecosystems    
Kecheng Liu, The University of Reading, United Kingdom

Information for Regional Innovation Systems  
Fred Phillips, University of New Mexico, United States


Taming Complexity with Self-managed Systems  
Danny Menasce, George Mason University, United States

Ontology-based Information Integration  
Marie-Christine Rousset, Université Grenoble-Alpes and Institut Universitaire de France, France

How Digital Twins Enable Model Driven Manufacturing  
Mike Papazoglou, Tilburg University, Netherlands

Digital Innovation and Future of Knowledge Management  
Manlio Del Giudice, University of Rome Link Campus, Italy


Decision Guidance Systems and Applications to Manufacturing, Power Grids, Supply Chain and IoT    
Alexander Brodsky, George Mason University, United States

Empirical Approach to Learning from Data (Streams)  
Plamen Angelov, Lancaster University, United Kingdom

Software Defined Cities  
Salvatore Distefano, Università degli Studi di Messina, Italy

The Future of Information Systems - Direct Execution of Enterprise Models, Almost Zero Programming    
David Aveiro, University of Madeira / Madeira-ITI, Portugal


The Enterprise Information Systems to Realize and Understand More About Our World  
Victor Chang, Teesside University, United Kingdom

High-level Verification and Validation of Software Supporting Business Processes  
Hermann Kaindl, TU Wien, Austria

Model Me If You Can - Challenges and Benefits of Individual and Mass Data Analysis for Enterprises  
Marco Brambilla, Politecnico di Milano, Italy

Five Challenges to the Web Information Systems Field  
Christoph Rosenkranz, University of Cologne, Germany


Big Data Analytics - Just More or Conceptually Different?  
Claudia Loebbecke, University of Cologne, Germany

The Sensing Enterprise - Enterprise Information Systems in the Internet of Things    
Sergio Gusmeroli, Italy

Making Process Mining Green - Using Event Data in a Responsible Way    
Wil Van Der Aalst, Technische Universiteit Eindhoven, Netherlands

The Power of Text Mining - How to Leverage Naturally Occurring Text Data for Effective Enterprise Information Systems Design and Use  
Jan vom Brocke, University of Liechtenstein, Liechtenstein


Money-over-IP - From Bitcoin to Smart Contracts and M2M Money    
George Giaglis, Athens University of Economics and Business, Greece

Empowering the Knowledge Worker - End-User Software Engineering in Knowledge Management    
Witold Staniszkis, Rodan Development, Poland

Complexity in the Digital Age - How can IT Help, not Hurt
Martin Mocker, MIT, USA and Reutlingen University, Germany

Towards Data-driven Models of Human Behavior  
Nuria Oliver, Spain


Semiotics in Visualisation    
Kecheng Liu, The University of Reading, United Kingdom

Why ERP Systems Will Keep Failing    
Jan Dietz, Delft University of Technology, Netherlands

Conceptual Modeling in Agile Information Systems Development    
Antoni Olivé, Universitat Politècnica de Catalunya, Spain

An Engineering Approach to Natural Enterprise Dynamics - From Top-down Purposeful Systemic Steering to Bottom-up Adaptive Guidance Control  
José Tribolet, INESC-ID / Instituto Superior Técnico, Portugal

Data Fraud Detection    
Hans-J. Lenz, Freie Universitat Berlin, Germany


Agile Model Driven Development    
Stephen Mellor, Freeter, United Kingdom

Semantic and Social (Intra)Webs    
Fabien Gandon, INRIA, France

Multi-Perspective Enterprise Modelling as a Foundation of Method Engineering and Self-Referential Enterprise Systems    
Ulrich Frank, University of Duisburg-Essen, Germany

Architecture-based Services Innovation    
Henderik A. Proper, Luxembourg Institute of Science and Technology, Luxembourg


Design by Units - A Novel Approach for Building Elastic Systems    
Schahram Dustdar, Vienna University of Technology, Austria

Hybrid Modeling    
Dimitris Karagiannis, University of Vienna, Austria

Managing Online Business Communities    
Steffen Staab, University of Koblenz-Landau, Germany

Requirements Engineering: Panacea or Predicament?    
Pericles Loucopoulos, Manchester University, United Kingdom

Trends in Blog Preservation    
Yannis Manolopoulos, Open University of Cyprus, Nicosia, Cyprus


Xuewei Li, Beijing Jiaotong University, China

Leszek Maciaszek, Wroclaw University of Economics and Business, Poland and Macquarie University, Sydney, Australia

Harold Krikke, Tilburg University, Netherlands

Kecheng Liu, The University of Reading, United Kingdom

Into the Cloud Enterprises
Yulin Zheng, UFIDA Software Co., Ltd, China

Shoubo Xu, Chinese Academy of Engineering / Beijing Jiaotong University, China

Yannis A. Phillis, Technical University of Crete, Greece

Modeling and Analysis Techniques for Cross-Organizational Workflow Systems: State of the Art
Lida Xu, Old Dominion University, United States


Runtong Zhang, Beijing Jiaotong University, China

Anind K. Dey, Carnegie Mellon University, United States

David Olson, United States

Robert P. Duin, Netherlands

Michel Chein, LIRMM, University of Montpellier 2, France


Masao J. Matsumoto, Solution Research Lab, Japan

Barbara Pernici, Politecnico di Milano, Italy

Michael Papazoglou, University of Tilburg, Netherlands

Jianchang Mao, United States

Peter Géczy, AIST, Japan

Ernesto Damiani, United Arab Emirates

Michele M. Missikoff, ISTC-CNR, Italy


Service Engineering for Future Business Value Networks  
Jorge Cardoso, Universidade de Coimbra, Portugal

Towards a Distributed Search Engine
Ricardo Baeza-Yates, VP of Yahoo!, Universidad de Chile, Chile

From Stone Age to Information Age: (Software) Languages through the Ages
Jean-Marie Favre, Joseph Fourier University, LSR, France

The Link between Paper and Information Systems
Moira C. Norrie, Switzerland


Semantics to Empower Services Science: Using Semantics at Middleware, Web Services and Business Levels
Amit Sheth, Wright State University, United States

Trends in Business Process Analysis: Using Process Mining to Find out What is Really Going on in Your Organization
Wil Van Der Aalst, Technische Universiteit Eindhoven, Netherlands

Service-Oriented Architecture: One Size fits Nobody  
Christoph Bussler, Google, Inc., United States

Information Logistics in Networked Organisations: Issues, Concepts And Applications  
Kurt Sandkuhl, School of Engineering, Jönköping University, Sweden

Driving Ahead: Joint Enterprise-Embedded Computing in Smart Clouds, Smart Dust and Intelligent Automobiles
Venkatesh Prasad, Ford Motor Company, United States

Introducing an IT Capability Maturity Framework
Martin Curley, United States

Enterprise Information Systems for Use: From Business Processes to Human Activity
Larry Constantine, University of Madeira and Constantine & Lockwood Ltd., Portugal


Biometric Recognition: How Do I Know Who You Are?
Anil Jain, Michigan State University, United States

P2P Semantic Mediation of Web Sources  
Georges Gardarin, Prism Laboratory, France

Human Activity Recognition – A Grand Challenge  
J. K. Aggarwal, The University of Texas At Austin, United States

Attribute Cardinality Maps in Query Optimization: Their Theory and Implementation in Real-Life Database Systems
John Oommen, Carleton University, Canada

Reflexive Community Information Systems  
Matthias Jarke, RWTH Aachen University, Germany

Data Management in P2P Systems: Challenges and Research Issues  
Timos Sellis, IMIS, Greece


Engineering Web Applications - Challenges and Perspectives  
Daniel Schwabe, Catholic University of Rio de Janeiro, Brazil

Enterprise information systems implementation research: assessment and suggestions for the future  
Henri Barki, HEC Montréal, Canada

Enterprise Ontology  
Jan Dietz, Delft University of Technology, Netherlands

Information Technology, Organizational Change Management, and Successful Interorganizational Systems
M. Lynne Markus, Bentley University, United States

The Java Revolution: From Enterprise to Gaming  
Raghavan N. Srinivas, United States

Changing the way the enterprise works: Operational Transformations  
Thomas Greene, United States

Model Driven Architecture: Next Steps  
Richard Soley, Object Management Group, Inc., United States

Emotional Intelligence in Agents and Interactive Computers  
Rosalind W. Picard, M.I.T., United States


Engaging Stakeholders in the Definition of Strategic Requirements
Pericles Loucopoulos, United Kingdom

Organizational Patterns: Beyond Technology to People
Jim Coplien, Gertrud & Cope, Denmark

Managing Complexity of Enterprise Information Systems
Leszek Maciaszek, Wroclaw University of Economics and Business, Poland and Macquarie University, Sydney, Australia

Evolutionary Project Management: Multiple Performance, Quality and Cost Metrics for Early and Continuous Stakeholder Value Delivery -An agile approach  
Tom Gilb, Norway

Large Scale Requirements Engineering: Why Is It Different?
Kalle Lyytinen, University of Jyväskylä, Finland

Collaboration @ Work: 3rd Wave of Internet to Foster Collaboration between Individuals on the Seem
Isidro Laso, Spain

Real-time Knowledge-based Systems for Enterprise Decision Support and Systems Analysis

Albert Cheng
University of Houston

This talk explores the use of real-time knowledge-based systems (RTKBSs) for enterprise decision support as well as for systems specification and analysis. Knowledge-based systems for monitoring and decision-making in a real-time environment must meet stringent response-time and logical-correctness requirements. Modern enterprise information systems require shorter response times and greater reliability for businesses to stay competitive. The critical nature of such decision-making systems requires that they undergo rigorous and formal analysis prior to their deployment. This talk describes how knowledge-based systems for decision support are formally analyzed. It also shows how the requirements and operations of enterprise systems can be modeled as a rulebase, which can then be formally analyzed.
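The core idea, enterprise decision logic encoded as a rulebase whose stabilization can be checked mechanically, can be sketched in a few lines (an illustrative toy, not the formal analysis framework the talk describes; the rules and the iteration bound are invented for this example):

```python
# Minimal forward-chaining rulebase: each rule fires when its premises
# hold, asserting its conclusion. Because the fact set only grows,
# iteration reaches a fixed point, and counting iterations bounds the
# response time -- a toy version of the formal analysis the talk discusses.
RULES = [
    ({"order_received"}, "check_inventory"),
    ({"check_inventory", "stock_low"}, "reorder"),
    ({"order_received", "payment_ok"}, "ship"),
]

def run(facts, rules, max_iterations=10):
    """Fire rules to a fixed point; return (final facts, iterations used)."""
    facts = set(facts)
    for i in range(max_iterations):
        new = {c for premises, c in rules if premises <= facts} - facts
        if not new:                     # fixed point: the system has stabilized
            return facts, i
        facts |= new
    raise RuntimeError("response-time bound exceeded")

facts, steps = run({"order_received", "payment_ok", "stock_low"}, RULES)
print(sorted(facts), steps)
```

Because the rules are monotone, termination and a worst-case step count can be established before deployment, which is the kind of guarantee the abstract calls for.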

Automatic Speech Recognition: a Review

Jean-Paul Haton
Université Henri Poincaré, Nancy 1

Automatic speech recognition (ASR) has been extensively studied during the past few decades. Most present systems are based on statistical modeling, at both the acoustic and linguistic levels, not only for recognition but also for understanding. Speech recognition in adverse conditions has recently received increased attention, since noise resistance has become one of the major bottlenecks for the practical use of speech recognizers. After briefly recalling the basic principles of statistical approaches to ASR (especially in a Bayesian framework), we present the types of solutions that have been proposed so far to obtain good performance in real-life conditions.
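The Bayesian formulation the abstract alludes to is the standard statistical decision rule for ASR (textbook notation, not notation specific to this talk):

```latex
\hat{W} = \arg\max_{W} P(W \mid A)
        = \arg\max_{W} \frac{P(A \mid W)\, P(W)}{P(A)}
        = \arg\max_{W} P(A \mid W)\, P(W)
```

where A is the acoustic observation sequence and W a candidate word sequence; P(A | W) is the acoustic model, P(W) the language model, and P(A) can be dropped since it does not depend on W.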

IS Engineering Getting out of Classical System Engineering

Michel Léonard
University of Geneva

An Enterprise Information System must be pertinent to human and enterprise activities. It plays a central role because most activities are now supported by an information system (IS), and because in most cases the enterprise development process itself depends on IS development. Enterprise and IS developments are completely interwoven. Concepts and rules are therefore needed that explain both the difficulties and the potential that information technologies bring to ISs. The IS domain establishes a rendezvous between various competencies and responsibilities. The classical software engineering domain, which does not have the same objectives, does not provide adequate concepts and rules for the IS engineering domain. From the point of view of this talk, IS engineering appears as a generalization of the software engineering domain.

Reasoning with Goals to Engineer Requirements

Colette Rolland
University of Paris 1

The concept of a goal has been used in multiple domains, such as management sciences and strategic planning, artificial intelligence, and human-computer interaction. Recently, goal-driven approaches have been developed and tried out to support requirements engineering activities such as requirements elicitation, specification, validation, modification, structuring, and negotiation. The talk reviews various research efforts undertaken in this line of research. It uses L'Ecritoire, an approach that supports requirements elicitation, structuring, and documenting, as a basis to introduce the issues in using goals to engineer requirements and to present the state of the art.

Experimental software engineering: Role and impact of measurement models on empirical processes

Giovanni Cantone
University of Rome

The talk reasons about the impact that the measurement models (MMs) involved have on the complexity of an empirical evaluation process. We begin by considering experience from classical empirical studies, which are based on counting models or variants of such MMs. In particular, experience is taken into consideration both from the evaluation of the effectiveness and efficacy of software testing strategies vs. code reading, and from the inspection of analysis and design documents vs. the quality of requirement specifications. The talk continues by considering MMs for goal-driven technology transfer (TT); in our experience, such models are much more complex than the ones above. The talk emphasizes metrics for workflow automation technology (WAT), reasons about the development of an MM for evaluating WAT, discusses the role of such an MM in evaluating the adequacy of candidate technologies, defines an empirical process for eventually making a choice, and finally presents empirical data and discusses the results.

Specific Relationship Types in Conceptual Modeling: The Cases of Generic and with Common Participants

Antoni Olivé
Polytechnic University of Catalonia

We review the role of the specific relationship types in conceptual modeling. We show that their practical interest lies in the fact that we can develop special mechanisms to ease their representation, special procedures for reasoning about them, and methods for their efficient implementation. The talk focuses on two specific relationship types. A generic relationship type is one that may have several realizations in a domain. Typical examples are IsPartOf, IsMemberOf or Materializes, but there are many others. The use of generic relationship types offers several important benefits. However, the achievement of these benefits requires an adequate representation method of the generic relationship types, and their realizations, in the conceptual schemas. In the talk, we propose two new alternative methods for this representation; we describe the contexts in which one or the other is more appropriate, and show their advantages over the current methods. We also explain the adaptation of the methods to the UML. A binary relationship type with common participants is one in which all instances of one participant are related to the same instance of the other participant. The concept applies also to n-ary relationship types. Most current conceptual schemas do not represent explicitly these relationship types. In the talk, we show the advantages of their explicit representation as derived relationship types. We also explain how to represent them in the UML.
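The notion of a generic relationship type with several realizations, and of generic reasoning shared by all of them, can be pictured in code (an invented encoding for illustration only; the class and relationship names are this example's, not the representation method proposed in the talk):

```python
# A generic relationship type (e.g. IsPartOf) may be realized several
# times in one schema; each realization fixes the participating entity
# types. Keeping realizations linked to their generic type lets generic
# reasoning (here: transitivity of part-of) apply to all of them at once.
from dataclasses import dataclass, field

@dataclass
class GenericRelationshipType:
    name: str
    realizations: list = field(default_factory=list)

    def realize(self, source: str, target: str):
        self.realizations.append((source, target))

is_part_of = GenericRelationshipType("IsPartOf")
is_part_of.realize("Engine", "Car")      # one realization: EngineIsPartOfCar
is_part_of.realize("Piston", "Engine")   # another: PistonIsPartOfEngine

def transitive_closure(rel):
    """Generic reasoning shared by every realization of the type."""
    pairs = set(rel.realizations)
    changed = True
    while changed:
        changed = False
        for a, b in list(pairs):
            for c, d in list(pairs):
                if b == c and (a, d) not in pairs:
                    pairs.add((a, d))
                    changed = True
    return pairs

print(("Piston", "Car") in transitive_closure(is_part_of))  # derived pair
```

The payoff mirrors the abstract's claim: the reasoning procedure is written once against the generic type, not once per realization.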

Manufacturing Execution System for the Automotive Supply Chain

Oleg Gusikhin
Ford Scientific Research Lab

Traditionally, production control at North American automotive first-tier supplier plants (both internal and external) is based on information from the corporate material planning system. The process capabilities of these plants are significantly affected by uncertainties such as random yield, production rate, and downtime. Thousands of inter-related events and exceptions occur on the manufacturing floor every day; as a result, expediting has become the standard way of doing business, and executing the planned schedule has become the exception. Because of this, it is difficult to provide adequate levels of customer service without high premium freight, inventory, and overtime. Competitive pressure to reduce the cost of operations is forcing a revision of the way production-related information is managed. Specifically, there is a need for a real-time decision support system for production execution. Such a system, referred to as a Manufacturing Execution System (MES), must integrate customer requirements from the corporate MRP with plant-floor automation data. This presentation is based on first-hand experience with MES implementation at Ford first-tier supplier plants. It will address the need for and describe the benefits of MESs, as well as their critical functions, including (1) integration of production planning and distribution control systems with real-time plant-floor data; (2) demand pull from distribution to upstream operations; (3) value-chain coordination and event management; and (4) real-time visibility of the manufacturing and distribution operations.

Models Everywhere

Jean Bézivin
University of Nantes

We are presently witnessing an important paradigm shift in the area of information system construction, namely from object and component technology to model technology. The object technology revolution allowed the replacement of the more than twenty-year-old stepwise procedural decomposition paradigm by the more fashionable object composition paradigm. Surprisingly, this evolution itself seems to be triggering another, even more radical change today, towards model transformation. To understand the extent and the real meaning of the recent move from object-based to model-based architectures of information systems, it is very instructive to study the new vision proposed by the OMG (Object Management Group), called Model Driven Architecture (MDA) [2], [1]. The OMG has proposed a modeling language called UML (Unified Modeling Language) that is a great industrial success, but whose applicability scope is not yet completely stabilized. In order to allow the definition of other similar languages as well, the OMG uses a general framework based on the MOF (Meta-Object Facility). Both UML and the MOF are basic building blocks of the new MDA architecture. In this transition from code-oriented to model-oriented software production techniques, a key role is now played by the concept of a meta-model. The MOF emerged from the recognition that UML was one possible meta-model in the information system landscape, but not the only one. Facing the danger of a variety of incompatible meta-models emerging and evolving independently (data warehouse, workflow, software process, etc.), there was an urgent need for an integration framework for all meta-models in the software development scene. The answer was thus to provide a language for defining meta-models, i.e. a meta-meta-model, together with a general framework for their design, verification, evolution, and maintenance.
In this context, the need for general model transformation tools clearly appears. One of the main targets of MDA is parametric generation from high-level models to variable middleware platforms (CORBA, DotNet, EJB, Web, etc.). Models are defined (constrained) by meta-models. A meta-model is an explicit specification of a set of concepts and the relations between them. It is used as a consensual abstraction filter in a particular modeling activity. A meta-model defines a description language for a specific domain of interest (platform or business). For example, UML describes the artifacts of an object-oriented software system. Other meta-models may address other domains like process, organization, test, quality of service, etc. They correspond to highly specialized identified domains (platform or end-user), and their number may be very large. They are defined as separate components, and many relationships exist between them. The long-awaited silver bullet for separation of aspects could finally be in sight. Model engineering considers meta-models as first-class entities with low granularity and high abstraction. This emerging technology can be related and compared to knowledge engineering (ontologies), meta-data management, formal grammars, and XML semi-structured data engineering.

Resource Allocation in Large-scale Systems, a Decoupling Approach

Nuno Martins

In this talk, implementation and design issues are discussed for a class of resource allocation problems. The idea of decoupling is used as the main tool to achieve significant simplification and complexity reduction. Linear input flow constraints are also considered. An application example illustrates the main ideas of the talk.

Information Systems and Knowledge Management

Giorgio De Michelis
University of Milano

This talk aims to discuss how Knowledge Management transforms our view of Information Systems, and in particular of Management Information Systems (MISs). On the one hand, we will briefly recall the different architectures we can use to get knowledge about the performance of an organization from its Information System in order to support its managerial decision processes. Both ERP systems, allowing flexible data integration, and middleware components, allowing the creation of data warehouses where the information created within the typical EDP procedures of an organization is made available to users for controlling, evaluating, and simulating its processes and performance, provide effective means for building high-quality MISs. On the other hand, we will introduce knowledge management systems as systems that bring forth the knowledge context of any activity so that it is immediately usable by managers and professionals while acting and interacting. In my view, in fact, knowledge management deals not only with storing relevant explicit knowledge and making it selectively accessible to users (through search engines, filtering, and recommender systems) but also, and mainly, with making both the tacit and explicit knowledge distributed within an organization directly usable in its collaboration and decision processes (De Michelis, 2001). The concept of "view with context" we developed in the Klee&Co Esprit Project is a good example of what knowledge management means to me (De Michelis et al., 2000), showing how we can develop 'intelligent' systems that effectively combine push and pull philosophies. Moreover, from the knowledge management viewpoint, the managers and professionals of an organization, while collaborating and taking decisions together, need to access and share relevant knowledge not only about its internal processes and performance but also about the world where it operates: market, customers, suppliers, competitors, political, social and cultural context, etc.
This requires that we move from the idea of developing an MIS to the idea of integrating the Information System components with a Knowledge Management System. The basis for this integration can be found in the three-faceted view of Information Systems I developed with many colleagues within a Euro-Canadian project dedicated to Cooperative Information Systems (De Michelis et al., 1998, 1998b). If we design an Information System as a three-faceted system, where the Systems, Organizational, and Group collaboration facets characterize it from the operational, managerial, and practical perspectives, respectively, then the Group collaboration facet will guide the integration of all the tacit and explicit knowledge needed by any decision maker in the organization within his/her workspace, so that he/she can use it whenever needed, without any particular effort.

Software Agents and e-Business

Frank Dignum
University of Utrecht
The Netherlands

In recent years, many people have advocated the use of software agents to solve problems in the design of complex systems for organisations. Although the number of workshops and conferences on agents has grown tremendously in these years, there are hardly any operational applications of agent systems in industry. In this presentation, we will discuss the actual potential of deploying agents in industrial applications. What are the main features of agents that make them attractive as a tool to develop systems, especially in the context of an e-business environment? In order to discuss these features, we first have to look at the characteristics of e-business environments. Although this is not the place to discuss the current trends in depth, one can easily agree on the following points: 1. Systems are hardly ever stand-alone anymore. ERP systems for resource planning are connected to order handling in order to achieve just-in-time planning, workflow management systems connect to ERP systems to optimize the distribution of work and resources throughout a process, etc. 2. What is probably even more important is that the software systems of different companies are directly connected in order to enable, for example, electronic purchasing and inter-organisational workflow. 3. The environments in which the systems have to operate are becoming more open, and therefore the systems have to adapt more easily to changes in this environment, e.g. flight reservation systems that are coupled to the WWW. 4. Instead of having a limited number of fixed contacts with other companies, companies operate in networks in which they have short-term relations with different partners. This inhibits setting up expensive and inflexible connections like EDI.
Some of the consequences are that systems should be able to adjust easily to changing environments, and that they should be able to communicate with other systems on a "peer-to-peer" basis instead of a "hierarchical" basis (as is the case for remote procedure calls or method invocations). Instead of monolithic systems, this environment asks for distributed systems that can be flexibly connected. The standard features of software agents seem to fit the above requirements perfectly. Software agents are pro-active, reactive, autonomous, and social. So why are software agents not more widely used in industry? The main reason is that although agents should adhere to all the features mentioned above, in practice many systems do not yet do so. Selecting the above features as the fundamental features of agents is one step, but it does not automatically lead to implemented systems with these properties. In this presentation, I will argue that there are two main challenges to using agents successfully in e-business environments. First, the fundamental theory underlying agent technology should be integrated and consolidated. Although some attempts are being made, they are far from complete yet.

Data => Information => Wealth: New Opportunities for Enterprise Information Systems

Thomas Greene

"If you cannot measure it, you cannot manage it." The rapid expansion of the global information space means that dramatic improvements in enterprise information can occur through the rapid connection of the enterprise to newly emerging external information sources. Enterprise information systems must be viewed from a "work local, think global" perspective. The information resources of the enterprise have long been recognized as a strategic resource. The increased pace of change in computer/network technologies and new connections to wider information spaces necessitate the use of models that can change as quickly as the environments, in order to aid in planning. It has been asserted that national wealth depends on the economic environment, the social environment, and information exchange within the nation. A further assertion is that a parameterization of these environments, followed by measurement and averaging, gives a meaningful summary of the wealth of a nation. Notice that in this model information exchange is a component used to characterize wealth. Let us examine an enterprise-scale model of this idea of information exchange. A top-down model of enterprise information subsystems and infrastructure components is essential to planning resource commitments. The connectedness of the enterprise information streams can provide opportunities for dramatic improvement in enterprise efficiency and effectiveness, and potentially yield an increase in the wealth of the enterprise. Use the data to create information that can create new wealth.

Obtaining Precision when Integrating Information

Gio Wiederhold
Stanford University

Precision is important when information is to be supplied for commerce and decision-making. However, a major problem in the web-enabled world is the flood and diversity of information. Through the web we can be faced with more alternatives than can be investigated in depth. The value system itself is changing: whereas traditionally information had value, it is now the attention of the purchaser that has value. New methods and tools are needed to search through the mass of potential information. Traditional information retrieval tools have focused on returning as much potentially relevant information as possible, in the process lowering precision, since much irrelevant material is returned as well. However, for business e-commerce to be effective, one cannot present an excess of unlikely alternatives (type 2 errors). The two types of errors encountered, false positives and false negatives, now differ in importance. In most business situations, a modest fraction of missed opportunities (type 1 errors) is acceptable. We will discuss the tradeoffs and present current and future tools to enhance precision in electronic information gathering.
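The tradeoff between the two error types can be made concrete with a standard precision/recall computation (a textbook illustration with invented example sets, following the abstract's type 1/type 2 labeling; not material from the talk itself):

```python
# Retrieved vs. relevant documents: irrelevant items returned (the
# abstract's type 2 errors, false positives) hurt precision; relevant
# items missed (type 1 errors, false negatives) hurt recall. The
# abstract argues e-commerce tolerates some misses but not a flood of
# unlikely alternatives, i.e. it should favor precision over recall.
def precision_recall(retrieved, relevant):
    true_pos = len(retrieved & relevant)
    precision = true_pos / len(retrieved) if retrieved else 1.0
    recall = true_pos / len(relevant) if relevant else 1.0
    return precision, recall

retrieved = {"d1", "d2", "d3", "d4"}   # what the search tool returned
relevant = {"d1", "d2", "d5"}          # what actually answers the need
p, r = precision_recall(retrieved, relevant)
print(p, r)   # 0.5 precision (2 of 4 returned), ~0.67 recall (2 of 3 relevant)
```

Tightening the query to drop d3 and d4 would raise precision at the cost of possibly missing d5, which is exactly the tradeoff the talk examines.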

Must Computer Systems Have Users?

Anatol Holt
University of Milano

We usually think "yes"; and we usually think that a "user" has to be a person .But we do not always think so; and sometimes we think: at least ONE part (of an interactive process) must be played by a person, while the other parts can be played by the computer itself. Three examples of this latter attitude easily come to mind, and are certainly enough for illustrative purposes: (a) computer games (such as cards or chess); computer software application packages can be written in which the computer is programmed to take one part, and the "user" of this program takes the other part; (b) learning/teaching: the computer takes the part of the teacher, while the student (who is a person) "uses" the program; (c) "Lisa" (a computerized "doctor" who treats mental patients -- originally written by J. Weizenbaum years ago at MIT); here it is "normal" to think: the patient is a person, while the analyst is the machine. However: in the non-computer version of these (interactive) processes, both parts must be persons. And of course we can ask: does it make sense -- for example in cases (a)-(c) above -- to let a computer play all parts? If this is admitted we are saying either: (d) the computer does not have a "user", or (e) the user of a computer can be another computer.But (e) is equivalent to saying: a computer can perform a function without a user; for a "user" must surely have non-computer characteristics -- whether restricted to persons or not -- furthermore, characteristics that are essential to actions performed "by computer" or "with the help of computers". I know of only one source for such characteristics, and it is a source which restricts the "user" to be a person, namely this. Every action -- whether a computer is involved or not -- requires that someone-or-something take responsibility for it. How do I know that this requirement restricts "users" to be persons? 
Because only persons can be rewarded or punished (by other persons); and only entities that can be rewarded and punished can carry responsibility. There is an issue which the above raises: what technical relationships to the running equipment can constitute "use"? Take Deeper Blue, a computer system that is said to have played chess against Kasparov. Kasparov made moves, and Deeper Blue made moves in reply, without any person (or group) taking any direct part in the game. But if Kasparov had won, he could have taken a purse (or a trophy) for winning, which Deeper Blue could not have! At most, such a purse (or trophy) could have been awarded to the team of programmers and engineers that fielded Deeper Blue against Kasparov. But are we prepared to count this team as "users" (of the computer system), as Kasparov obviously was? (Clearly, not in the same sense.) Yet there is a sense, easy enough to understand, in which the facts as presented above mean that Kasparov did not play chess against Deeper Blue, but against the programmers-and-engineers that confronted him via Deeper Blue. In this talk these issues, so fateful to our computerized future, are explored. They intertwine with a type of question which is becoming progressively more important (in the era of the Internet and groupware), namely: suppose the computer system mediates between different role players? Are all of these role players "users" (for their "uses" obviously depend on each other)?

E-commerce and its Real-Time Requirements (Modelling E-commerce as a Real-Time System)

Albert Cheng
University of Houston

E-commerce is a product of the Internet, often called the fourth information revolution (following writing in Mesopotamia 5000 years ago, the written book in China 3300 years ago, and Gutenberg's printing press in 1455). The keynote will address the e-commerce revolution and its requirements. Instead of the traditional bipartite business supply chains linking suppliers and customers, e-supply chains connect suppliers and consumers alike directly to an on-line marketplace using the Internet, forming a "star" Internet/Web-centric commerce system. This effectively shifts the balance of power from vendors to consumers, making the need for responsive and adaptive just-in-time business models more critical. The current state of e-commerce is fuelled by a massive number of developments in industry, government, and academia. Some of these efforts are complementary while others are divergent. This talk will discuss the technical requirements of e-commerce, such as integration, interoperability, scalability, reliability, accessibility, timeliness, and security. In particular, the presentation will show how e-commerce can often be modelled as a real-time system, so that solutions to these requirements can be more readily formulated. A real-time system must guarantee (1) the on-time delivery of results (on-time delivery of products and services to customers in e-commerce); (2) the secure delivery of these results (secure transactions in e-commerce); (3) adaptive execution (feedback and updates in customer relations in e-commerce); and (4) fault-tolerant execution (reliable and accessible 24-hour e-commerce).
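Purely as an illustration of the abstract's point that an e-commerce transaction can be treated as a real-time task, the timing and fault-tolerance guarantees can be sketched as a deadline-bounded wrapper with retries (a hypothetical sketch; the function names, deadline value and retry policy are assumptions, not taken from the talk):

```python
import time

# Hypothetical sketch: an e-commerce transaction as a real-time task with
# a deadline (on-time delivery) and a retry loop (fault tolerance).
# All names and numbers are illustrative assumptions.

def run_with_deadline(task, deadline_s, retries=2):
    """Run `task`; report failure if it cannot complete before the deadline."""
    start = time.monotonic()
    for _ in range(retries + 1):
        remaining = deadline_s - (time.monotonic() - start)
        if remaining <= 0:
            return ("missed-deadline", None)  # timing guarantee violated
        try:
            return ("ok", task())
        except RuntimeError:
            continue  # fault tolerance: retry while the deadline allows
    return ("failed", None)

status, result = run_with_deadline(lambda: "order-confirmed", deadline_s=0.5)
```

The adaptive and security guarantees would be further layers around the same task wrapper; the sketch only captures the timing and fault-tolerance aspects.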

Making the most of your Knowledge

Ian Ritchie
British Computer Society

In the past, organisations which have prospered have been the ones which have made the optimal use of plant and equipment, or of their sales and marketing muscle. In recent decades the financial strength and flexibility of companies have become more significant. Over the next few decades, these factors will become subservient to another powerful strength, exploited by the most successful organisations. That strength will be the knowledge which they possess. In order to remain truly competitive, organisations must now begin to marshal the skills and experience - the knowledge - which is held in the heads of their employees, and to maximise the benefits of identifying, sharing, and making key knowledge explicit. Ian Ritchie's presentation will look at developments in 'knowledge management' and what it means for future effectiveness.

Agents - The Future of Intelligent Communications

Robert Ghanea-Hercock
Future Tech. Group BT Labs

The future of telecommunication systems will be dominated by computer-enhanced mobile communication devices, i.e. communicators, capable of delivering voice, multimedia, and computing functionality to the mobile user. Such devices will become the primary interface to telecommunications and Internet networks within 2 years, outnumbering the traditional desktop computer interface. This paper discusses why mobile software agents are an ideal technology to support and enhance the operation of such devices, for example by providing transparent network management and support for mobile E-commerce. The paper also reviews the commercial value that agent-based technologies can deliver.

A New World for the Enterprise -- E-Commerce and Open Sources

Thomas Greene

The people gathered at this conference are working on very hard problems, but some of the assumptions upon which solutions to these problems are based have radically changed over the past few years, raising questions of who we are designing for and what tools we are using. First, because enabling information technologies now connect corporate decision makers directly to information sources at the operations level, traditional pyramid structures are flatter: internal enterprise information structures have less middle management, and the external pyramid, where consumers were at the bottom, has also changed. Now consumers express opinions and make purchases with a mouse click, not at the end of a direct marketing campaign. For many systems, the success criterion is becoming how many clicks an enterprise information system can generate. In this hour we will examine several issues arising from the new "consumer rules" E-commerce imperative. Second, open source software and vendor standards mean that software customers are able to change direction quickly in their system development. For example, many information systems that would have been closed by use of a proprietary web server are still kept open to change because they use an open source server as a system foundation. How can Apache be the most widely used web server, and Linux a widely used operating system, when the basic code is not wrapped up and maintained by a group of companies or a single company? Can global groups of volunteers give away ideas and build systems that outperform thousands of well-paid professionals in traditional corporations? What do the facts of the E-commerce imperative and the widespread use of open source programs mean for those of us who must create new systems of information to serve our enterprises?

Information Requirements are Human, Computers only Machines: MEASUR's Rigorous Methods Based on Social Norms

Ronald Stamper
University of Twente

Surely we need a revolution! By far the greater part of IS costs is non-technical, and most IS failures stem from poor understanding of organisational requirements and preparation for change. After a revolution through 180 degrees we might stand facing the organisation and realise that the computer can do no more than efficiently manipulate meaningless syntactic structures. The most efficient circulation and processing of information are valueless unless the information conveys meanings and expresses intentions that can influence the responsible people. MEASUR uses these italicised concepts to find the human information requirements in a precise form, and even to deliver a default version of the technical system. First, Problem Articulation Methods support teams of users in devising solutions, preparing the organisational changes and planning the project. The computer will fit into a part of the system, which is then subjected to detailed Semantic Analysis. From the resulting Semantic Normal Form (SNF) a default implementation can be generated. The practical benefits of MEASUR are many. Users can understand, criticise and improve specifications that are a twentieth of the normal volume. This reduces the risk of not meeting organisational needs, the reason why 60% of systems fail. Development, support and maintenance costs are slashed by a factor of ten. The components of the design are reusable, and the system can be implemented by a just-in-time strategy because of the stability of the specification. The design of the organisation has no technical elements, so it provides an ideal basis for an outsourcing contract. The methods can even be applied to drafting the contract, because they are based on an understanding of social and legal norms. Underlying MEASUR is the idea of Information Fields. Each of these is a group of people operating according to the same norms. An organisation consists of many overlapping Information Fields.
The norms govern the information requirements: the meanings, intentions, responsibilities and influence exerted. By comparison, the technical matters are mere details, although we spend all our time on them. The lecture will illustrate the methods.

Agent-Oriented Information System Development

John Mylopoulos
University of Toronto

Traditionally, software development techniques for information systems have been process- and data-driven, in the sense that the fundamental concepts used to define and analyse software during requirements analysis and design have been those of "process" and "data". This observation applies equally well to structured and object-oriented analysis and design techniques. We speculate on what a software development methodology might look like if it were founded on the notions of "actor" (who can be an agent, a position, or a role) and "goal". For our study we adopt Eric Yu's i* modelling framework and show how one can model and analyse early and late requirements, architectural design, and detailed design. The proposed methodology fits well with current work on agent-oriented programming frameworks.

"Industry Strength" – Its true meaning for high-tech SMEs.

Colin J. Theaker
University of Staffordshire

This presentation covers the critical issue of how advanced technologies can be taken out of the academic domain and adopted by modern IT industries. Around 90% of European companies fall within the category of Small to Medium-sized Enterprises (SMEs). These have very different constraints on the adoption of new technologies from those of large multi-national companies with large R&D budgets. In particular, the impact of making the wrong choice of technologies is more far-reaching. There is a critical balance between choosing leading-edge technologies that may yield competitive advantage and backing technologies that fail to fulfil their promise, leading to significant financial loss and consequently, perhaps, the downfall of the company. The concept of "industry strength" is therefore of particular strategic importance when applied to the technologies, techniques and tools to be adopted by an SME. This case study looks at the way a high-technology company, involved in real-time, safety-critical and mission-critical systems, has approached the adoption of new technologies. The paper covers the choice of paradigms, the methods adopted and the tools to support the methods. The technologies include object orientation, with UML as a design method; Java as a language for systems development; and the tools to support systems implementation. Configuration management systems are included for overall version control and project management. A metrics programme has also been introduced within the company to measure the impact of these technologies in a real-life environment. This paper presents a systematic and ongoing analysis of the impact of new technologies within a small enterprise seeking to maintain its position at the leading edge of technology.

Database Technology and its Applications

Peter Apers
University of Twente
The Netherlands

More and more application fields require database technology. These new application areas are a challenge for the database community, because their requirements differ from those of traditional administrative applications. New application areas include geographical applications, multimedia applications, the WWW, and workflow management, to name a few. They pose new requirements such as new data types, complex operations, and more advanced transaction models. This presentation concentrates on query optimization and transaction processing. A complex object model is presented that allows for the definition of new data types and complex operations on them in such a way that it is still possible to process queries efficiently. This model is currently being tested in the context of GIS, multimedia information retrieval, and semistructured data. An advanced transaction model is presented that allows for the support of business transactions in workflow management. This model has been implemented in an industrial-strength prototype.

Internet and Intranet Computing

Thomas Greene

We will examine the phenomena of the Internet and the Intranet and the explosion of issues caused by the World Wide Web. Both the Internet and the Web are interrelated technological "accidents". These historical accidents occurred because the information gatekeepers of government, industry, commerce and other institutions were not aware of the information globalization phenomenon that was occurring until it was already in place. Stress in many institutions is occurring because of the rapid changes and redefinitions the Internet and the Web cause. For example, national infrastructure components such as phone systems and newspapers are changing, the way in which commerce is conducted is changing, and even the way traditional governments operate is changing. In the near future a series of technology and regulatory changes will occur that may both enhance and limit the Internet and the already limited-access Intranets. Let us identify issues of fundamental importance to openness and the continued globalization of information flow, especially at the gateways of information flow. Some issues of commercial standards and government regulation should be monitored as we navigate through this complex world of technology change. Failure to monitor change in these issues carefully could mean that the Internet becomes a collection of Intranets, and that the globalization of commerce and information flow is slowed down or perhaps ceases altogether.

A Class of Distributed Decision Problems

Michael Athans

We discuss a class of truly distributed optimal decision problems that arise in hypothesis testing and binary detection. We investigate optimal team-theoretic decision rules under the assumption that several decision agents within an organisation make independent uncertain measurements of the same event, coordinate using constrained communication protocols, and arrive at a team-optimal decision. These problems are much more complex than their centralized counterparts and demonstrate subtle issues of decision making in decentralized settings. We examine the improvement in the quality of the decision as we allow increased communication among the decision agents. In addition, we discuss issues of different decision-oriented organizational architectures, e.g. flat vs. hierarchical, and the interplay between organizational topologies and restricted communication protocols for coordination. The results illustrate why it is difficult to structure superior (expanding or contracting) organizations.
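As a toy illustration of the setting described (not the talk's actual formulation; the error rates, number of agents and majority fusion rule are assumptions), agents making independent noisy binary measurements of the same event and sending one-bit reports to a fusion rule will, on average, outperform any single agent:

```python
import random

# Hypothetical sketch of decentralized binary detection: several agents observe
# the same event through a noisy channel and report one-bit local decisions.
# The 20% error rate and the majority rule are illustrative assumptions only.

def noisy_measure(truth, error_rate, rng):
    """One agent's local decision: the truth, flipped with some probability."""
    return truth if rng.random() > error_rate else (not truth)

def fuse(votes):
    """Team decision: simple majority over the agents' one-bit reports."""
    return sum(votes) > len(votes) / 2

rng = random.Random(0)
trials = 10_000
correct_single = correct_team = 0
for _ in range(trials):
    truth = rng.random() < 0.5
    votes = [noisy_measure(truth, 0.2, rng) for _ in range(3)]
    correct_single += (votes[0] == truth)
    correct_team += (fuse(votes) == truth)
# With independent errors, the fused decision is right more often than any
# single agent, illustrating the value of communication among decision agents.
```

The constrained-communication aspect of the talk corresponds to each agent sending only a single bit rather than its raw measurement; richer protocols would trade communication cost against decision quality.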

Status & Challenges for Multi-Agent Coordination Technology

Mark Klein

Objectives: Complex computing systems are increasingly being implemented as distributed collections of cooperating human and computational agents because of the large scale and inherently distributed nature of the problems now being addressed. The advantages of this distributed approach are many: they include simplified knowledge acquisition, increased performance, increased tolerance for failure of system components, reduced "brittleness" (i.e. the ability to handle a wider range of problems) and so on. This evolution has raised important new issues, however, concerning how such multi-agent systems can be coordinated productively. The objective of this tutorial is to critically review the current state of the art of computer-based multi-agent coordination support technology, and help participants better understand the key challenges and emerging solutions that are being identified by artificial intelligence and related fields. The tutorial will examine, in particular, the three main classes of coordination technology: process management, dependency capture and exception management. We will review the problems they are designed for, their functionality, areas of applicability, strengths and weaknesses. The key future challenges and emerging solutions for these technologies will be identified from the perspective of basic research undertaken in distributed artificial intelligence and related fields. Structure: The tutorial takes the form of roughly 150 PowerPoint slides (from which I select a subset to present based on the background of the audience) plus 15 minutes of video segments. All slides include substantive notes that summarize the content of that slide (useful for people who are not native English speakers) and provide literature and Web citations to relevant work. Section 1: Why Do We Need Coordination Technology? 
This section discusses how increasing coordination complexity in large-scale collaborative work is overwhelming current (mainly manual) approaches, leading to severe impacts on cost, quality and schedule. Section 2: The Types of Coordination Technology: This section discusses the key insight that the need for coordination is created by the existence of dependencies between distributed activities, and identifies the three kinds of coordination technology (process, exception and dependency management) that have emerged to address these challenges. Section 3: Process Management: This section gives an overview, with numerous specific examples, of the space of computational coordination mechanisms, ranging from pre-defined process models to emergent behavior among "socially aware" deliberative agents, and discusses key lessons as well as directions for future work. Section 4: Exception Management: This section describes technology aimed at enhancing the ability of multi-agent systems to respond effectively to dynamism (errors, resource changes, requirements changes, etc.) in agent systems and the environment they operate in. Key lessons and directions for future work are described. Section 5: Dependency Capture: This section discusses technology for dependency capture, a key piece of infrastructure for coordination mechanisms whose job is to optimize multi-agent system performance in the context of inter-task dependencies. Key lessons and directions for future work are described. Section 6: Future Challenges: This section discusses overall challenges for the future development of multi-agent coordination technologies, including integrating these technologies and developing more "socialized" agent infrastructures.

Overview of Enterprise Management Using SAP R/3

Satya Chattopadhyay
University of Scranton

A brief introduction to the benefits of E-Ware (Enterprise Ware) software systems will be followed by a concise description of the SAP R/3 system. R/3 functionality will be introduced through the primary interlocking application modules: Sales and Distribution (SD), Production Planning (PP), Materials Management (MM), Financial Accounting (FI), Controlling (CO), and Asset Management (AM). The role of the various master data tables in supporting the spectrum of business processes and transactions in a seamless fashion will be shown using a business process walkthrough. Managerial and performance criteria for the assessment of E-Ware systems will be discussed.

Analysis and Verification of Real-time Systems

Albert Cheng
University of Houston

The correctness of many systems and devices in our modern society depends not only on the effects or results they produce, but also on the time at which these results are produced. These real-time systems range from the anti-lock braking controller in automobiles to the vital-sign monitor in hospital intensive-care units. For example, when the driver of a car applies the brake, the anti-lock braking controller analyzes the environment in which the controller is embedded (car speed, road surface, direction of travel) and activates the brake with the appropriate frequency within fractions of a second. Both the result (brake activation) and the time at which the result is produced are important in ensuring the safety of the car, its driver and passengers. Computer hardware and software are increasingly embedded in the majority of these real-time systems to monitor and control their operations. These computer systems are called embedded systems, real-time computer systems, or simply real-time systems. Unlike conventional, non-real-time computer systems, real-time computer systems are closely coupled with the environment being monitored and controlled. Examples of these real-time systems include computerized versions of the braking controller and the vital-sign monitor, the new generation of airplane and spacecraft avionics, the planned Space Station control software, high-performance network and telephone switching systems, multimedia tools, virtual reality systems, robotic controllers, and many safety-critical industrial applications. These embedded systems must satisfy stringent timing and reliability constraints in addition to functional correctness requirements. There are two ways to ensure system safety and reliability. One way is to employ engineering (both software and hardware) techniques such as structured programming principles to minimize implementation errors and then utilize testing techniques to uncover errors in the implementation.
The other way is to use formal analysis and verification techniques to ensure that the implemented system satisfies the required safety constraints under all conditions, given a set of assumptions. In a real-time system, not only do we need to satisfy stringent timing requirements, but we must also be able to guard against an imperfect execution environment which may violate pre-runtime design assumptions. The first approach can only increase our confidence in the correctness of the system, because testing cannot guarantee that the system is error-free. The second approach can guarantee that a verified system always satisfies the checked safety properties. However, state-of-the-art techniques, which have been demonstrated on pedagogic systems, are often difficult to understand and to apply to realistic systems. Furthermore, it is often difficult to determine how practical a proposed technique is from the large number of mathematical notations used. The objective of this tutorial is to provide an introduction to formal techniques and tools that are practical for actual use. These theoretical foundations are followed by practical examples of employing these advanced techniques to build, analyze, and verify different modules of real-time systems. The tutorial then evaluates and assesses the practicality of the available techniques and tools for building the next generation of real-time systems. Topics covered include analysis and verification techniques/tools based on schedulability analysis, model checking, Statechart/Statemate, Modechart, timed automata, timed Petri nets, process algebra, real-time temporal logic, and semantic rule-based analysis.

The Information-field Paradigm and New Directions for Systems Analysis and Design

Ronald Stamper
University of Twente
The Netherlands

Information systems analysis and design is stagnant. It exists largely as an adjunct to software engineering to facilitate the application of computers. Its fundamental ideas, which have scarcely advanced since the 1950s, are based on a paradigm that makes us think with a technical bias. This paper proposes a new perspective that can make ISAD more productive, both practically and intellectually. We are used to treating an information system as the flow and manipulation of data to produce more data, which we re-label "information". This old "flow" paradigm fails to acknowledge that computers and data are not ends in themselves but only the means to achieve ends that are essentially social. Data have no value until they change how people think and how they are disposed to act. Starting from this different point, we arrive at an "information field" paradigm. An information field is established by a group of people who share a set of norms. Norms are the units of knowledge that enable us to co-operate in an organised way. They regulate our behaviour, beliefs, values and perceptions, and they all have the form:

IF condition THEN subject HAS attitude TO proposition

Information then serves the norm subject, who needs to know when the condition is met. When it is met, in the case of a behavioural norm, the subject is obliged, permitted or forbidden to act as the proposition specifies. For example:

IF the goods are faulty THEN the vendor HAS an obligation TO replace them

As a result of this norm being activated, the vendor will tend to replace the goods or offer to do so. Either output produces more information that enters the social system. The information needed by the group in the information field is defined by the set of norms they share. The requirements for any computer-based system to serve these people are simply a logical consequence of the formally defined field.
This field paradigm leads to a theory of information systems as social systems in which IT can play its limited role. It transforms our discipline from an aid to computer application into a formal and precise study of organised social behaviour, with wide intellectual and practical implications. Our discipline will be able to underpin all kinds of organisational re-engineering, with or without the use of IT. It has the potential to change the present broad-brush study of organisation into a precise science. In practice, with computer applications, we have shown that the field paradigm can lead to massive reductions in development, support and maintenance costs, increased system stability, greater reusability of design elements, and far less documentation that is also easier to understand. The lecture will illustrate the basic ideas of this theory and the methods of analysis (MEASUR) that it has generated.
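For illustration only, the norm form described in this abstract can be sketched as a small data structure with an activation check (a hypothetical sketch; the class, field and function names are assumptions, not MEASUR's own notation):

```python
from dataclasses import dataclass

# Hypothetical sketch of the norm form
#   IF condition THEN subject HAS attitude TO proposition
# using the abstract's faulty-goods example. Names are illustrative only.

@dataclass
class Norm:
    condition: str    # e.g. "goods are faulty"
    subject: str      # e.g. "vendor"
    attitude: str     # e.g. "obligation"
    proposition: str  # e.g. "replace the goods"

def activated(norm, facts):
    """A norm is activated when its condition holds among the current facts."""
    return norm.condition in facts

faulty_goods = Norm("goods are faulty", "vendor", "obligation", "replace the goods")
facts = {"goods are faulty"}
if activated(faulty_goods, facts):
    duty = f"{faulty_goods.subject} HAS {faulty_goods.attitude} TO {faulty_goods.proposition}"
```

A set of such norms shared by a group would then define one information field, and the union of activated norms would define the group's information requirements.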

The Challenge of Making Enterprises Virtual: An AI Perspective

Mark Fox
University of Toronto

No Information Available