Our Seminars in 2014

Practicing Social Practices

Abstract: In situations where agents have to function in an environment with people, social aspects become very important. In the area of Intelligent Virtual Agents, researchers have been adding social aspects to agents for many different purposes. However, with every aspect that is added, the agents become less manageable from a software engineering point of view. We argue for a more fundamental approach based on social elements. To get a grip on the complexity that comes with the introduction of social aspects, we propose the use of social practices. In this presentation I will discuss the nature of social practice theory and its applications to social agents. I will briefly show how social practices can actually lead to efficiency gains with respect to goal-oriented approaches.

BIO: Frank Dignum is a leading researcher in the field of social aspects of multi-agent systems. Over the past two decades he has contributed to agent communication, normative agent systems, agents for electronic commerce, and agents for social simulation and serious gaming. He has a particular interest in bridging the gap between theoretical frameworks and practical tools. He has accumulated around 8.5M euro in research funding, both national and EU funded. He is an associate editor of the Journal of Autonomous Agents and Multi-Agent Systems, has been a co-organizer of the Autonomous Agents and Multi-Agent Systems conference, and has served as general chair, program chair and co-organizer of numerous workshops and conferences, including the International Conference on Practical Aspects of Agents and Multi-Agent Systems, the PRIMA conference and the International Conference on Electronic Commerce.

Human-Agent Interaction

Abstract: The potential of artificial intelligent systems to interact and collaborate not only with each other but also with human users is no longer science fiction. Healthcare robots, intelligent vehicles, virtual coaches and serious games are currently being developed that exhibit social behaviour – to facilitate social interactions, to enhance decision making, to improve learning and skill training, to facilitate negotiations and to generate insights about a domain. The ability to exhibit social behaviour is paramount for agents to be able to engage in meaningful interaction with people. This requires not only agent models that start from and integrate different socio-cognitive elements such as emotions, social norms or personalities, but also organisation models that structure and regulate the interaction between people and agents. In this talk, I will present our current work on organisational models. In particular, I present a model for reasoning about organisational structures, and a model that regulates resource requests and sharing using conditional norms and use policies. I conclude with some new directions of work on agent deliberation architectures.

The post-theoretic enterprise

A critical review of the paper “Memetic Algorithms for Mining Change Logs in Process Choreographies”

Reference: Fdhila, Walid, Stefanie Rinderle-Ma, and Conrad Indiono. “Memetic Algorithms for Mining Change Logs in Process Choreographies.” In Service-Oriented Computing, pp. 47-62. Springer Berlin Heidelberg, 2014.

A critical review of “An Event Processing Platform for Business Process Management”

Presenter(s): Metta Santiputri
Time: 4pm onwards
Date: Thursday 30th, October, 2014
Venue: 6.105 – Smart Building

Reference: Herzberg, N., Meyer, A., and Weske, M. “An Event Processing Platform for Business Process Management.” In 17th IEEE International Enterprise Distributed Object Computing Conference (EDOC 2013), pp. 107–116, 9–13 Sept. 2013.

Revising beliefs towards the truth

Dr. Simon D’Alfonso, University of Melbourne
Date: Tuesday 21st, October, 2014
Time: 4pm onwards
Venue: 6.105 – Smart Building

Abstract: Traditionally, the field of belief revision has mainly been concerned with the relations between sentences (pieces of data) and the logical coherence of revision operations, with less concern for whether the dataset resulting from a belief revision operation has epistemically valuable properties such as truth and relevance. Gärdenfors, for example, who developed the predominant AGM framework for belief revision, argues that the concepts of truth and falsity become irrelevant for the analysis of belief change, as “many epistemological problems can be attacked without using the notions of truth and falsity”. However that may be, given that agents process incoming data with the goal of using it for successful action, this lacuna between belief revision and epistemic utilities such as truth and relevance merits attention.
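As a drastically simplified illustration of the mechanics under discussion (not the AGM framework itself, which operates on deductively closed theories with entrenchment orderings), a belief state can be treated as a set of propositional literals and revised by retracting any directly contradicting literal:

```python
# Toy belief revision over sets of propositional literals.
# A literal is a string such as "p" or "~p"; negation flips the "~" prefix.

def negate(literal: str) -> str:
    """Return the negation of a literal: p <-> ~p."""
    return literal[1:] if literal.startswith("~") else "~" + literal

def revise(beliefs: set, new_info: str) -> set:
    """Incorporate new_info, retracting any directly contradicting literal."""
    return (beliefs - {negate(new_info)}) | {new_info}

beliefs = {"p", "q"}
beliefs = revise(beliefs, "~p")   # new evidence contradicts p
print(sorted(beliefs))            # ['q', '~p']
```

Note that this operator says nothing about whether the resulting belief set is closer to the truth, which is precisely the gap the talk addresses.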

Data-Driven Risk Prediction for Software Projects

Morakot Choetkiertikul, University of Wollongong
Date: Thursday 16th, October, 2014
Time: 4pm onwards
Venue: 6.105 – Smart Building

Abstract: Identifying, assessing, and analyzing the risks relevant to a software project, and planning measures to deal with them, are critical to the success of the project. Current practices in risk management mostly rely on high-level, generic guidance or the subjective judgements of experts. Traditional risk management therefore requires extra activities beyond development tasks, possibly leading to extra costs. Since the collaboration of development teams and their development practices are supported and promoted by issue tracking systems such as JIRA, the growing ubiquity of software development data has opened significant opportunities for a new generation of data-driven risk prediction approaches. To address the drawbacks of traditional risk management, we propose a novel approach to risk prediction using the historical data of software projects. Specifically, our approach analyzes patterns of risks that occurred in the past, and uses those patterns to identify and assess risks in the current situation of the project. Our approach aims to provide actionable insight into the existence of specific risks in the current state of a project by identifying potentially risky tasks which could cause losses in software projects.
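To make the data-driven intuition concrete, here is a minimal, hypothetical sketch: estimate the probability that a task slips from the outcomes of similar past tasks. The field names (`priority`, `reassignments`, `delayed`) are invented for illustration and are not the speaker's actual feature set:

```python
# Minimal frequency-based risk scoring from historical issue-tracker data.
from collections import defaultdict

def risk_scores(history, features):
    """Estimate P(delayed | feature pattern) from past task records."""
    counts = defaultdict(lambda: [0, 0])          # pattern -> [delayed, total]
    for task in history:
        pattern = tuple(task[f] for f in features)
        counts[pattern][0] += task["delayed"]
        counts[pattern][1] += 1
    return {p: d / n for p, (d, n) in counts.items()}

history = [
    {"priority": "high", "reassignments": 2, "delayed": 1},
    {"priority": "high", "reassignments": 2, "delayed": 1},
    {"priority": "low",  "reassignments": 0, "delayed": 0},
]
scores = risk_scores(history, ["priority", "reassignments"])
print(scores[("high", 2)])   # 1.0: all similar past tasks slipped
```

A real approach would of course use richer features and a proper learning model; this only illustrates the shift from expert judgement to historical evidence.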

The post-theoretic enterprise

A critical review of “Automatic discovery of data-centric and artifact-centric processes”

Ayu Saraswati, Decision Systems Lab, UOW
Date: Thursday, September 18, 2014
Time: 4pm onwards
Venue: 6.105 – Smart Building

Reference: Nooijen, Erik H. J., Boudewijn F. van Dongen, and Dirk Fahland. “Automatic discovery of data-centric and artifact-centric processes.” In Business Process Management Workshops, pp. 316–327. Springer Berlin Heidelberg, 2013.

“Cool” computer science: Why NP-completeness matters

Professor Aditya Ghose, Decision Systems Lab, UOW
Date: Thursday 11th, September, 2014
Time: 4pm onwards
Venue: 6.105 – Smart Building

ABSTRACT: This is a popular science talk, in which I will try to revive what appears to be flagging interest in theoretical computer science. My main focus will be on explaining Cook’s Theorem and its proof, but along the way I will also address interesting questions arising from human computation, social computing and the connections between literature and the development of computational artefacts.

Diagnosing Industrial Business Processes: Early Experiences

Dr. Suman Roy, Infosys Labs and Decision Systems Lab, UOW
Date: Thursday 21st, August, 2014
Time: 4pm onwards
Venue: 6.105 – Smart Building

ABSTRACT: Modern-day enterprises rely on streamlined business processes for their smooth operation. However, many of these processes contain errors, a number of which are control-flow related, e.g., deadlock and lack of synchronization. These errors can hinder downstream analyses such as correct simulation and code generation. In real-life process models another kind of error is also quite common: syntactic errors which arise due to poor modeling practices. Detecting errors and identifying where they occur are equally important for the correct modeling of business processes. We consider industrial business processes modeled in the Business Process Modeling Notation (BPMN) and use graph-theoretic techniques and Petri net-based analyses to detect syntactic and control-flow related errors respectively. Based on this, we then diagnose different types of errors. We are further able to discover how error frequencies change with error depth and how they correlate with the size of subprocesses and swim-lane interactions in the models. Such diagnostic details are vital for business process designers in detecting, identifying and rectifying errors in their models.

Mining Enterprise Architecture Model

Ayu Saraswati, University of Wollongong
Date: Thursday 21st, August, 2014
Time: 4pm onwards
Venue: 6.105 – Smart Building

ABSTRACT: Enterprise architecture provides a bird’s-eye view of an organization. It is also a visualisation tool for organization stakeholders to manage and improve the organization. However, building an enterprise architecture is a time-consuming and often highly complex task, especially since it involves several different layers, from the lowest layer (the technology) up to the most abstract level (the business view). By leveraging the history of the organization itself, we can help build the architecture based on the organization’s experience. In this paper, we propose a method to develop an enterprise architecture based on historical data.

Mining Business Process Effects

The Limits of Organization

Dr.William Tibben, University of Wollongong
Date: Thursday 31st, July, 2014
Time: 4pm onwards
Venue: 6.105 – Smart Building

ABSTRACT: Nobel Memorial Prize winner in economics Kenneth Arrow is famous for his PhD work in social choice theory, which culminated in the Impossibility Theorem. His interest in the economics of information has perhaps been overshadowed by this success but is nonetheless every bit as stimulating as his work in social choice theory. A short monograph published by Arrow in 1974 called The Limits of Organization provides a penetrating insight into the role that organizations play in making the economic system work, but is not given much attention. The key to understanding Arrow’s thesis on organizations is information. As an economist, Arrow seeks to portray information as having attributes that defy conventional market economics. Central to Arrow’s reasoning is the function that organizations carry out in dealing with uncertainty. Uncertainty relates to the impossibility of truly knowing the future, but one can mitigate this situation, at least in a theoretical sense, by being fully informed. Information, therefore, is a key to understanding the notion of organization he develops. In summary, the organization’s fundamental role is to deal with the complexities of trying to best determine actions in an uncertain world where there is abundant access to information but the selection of such information is difficult and of critical importance. One gets a better insight into Arrow’s reasoning when analyzing the concept of consensus, which I will do.

SPEAKER BIO: William Tibben has been at the University of Wollongong since 2000 and is currently a Lecturer in the School of Information Systems and Technology. Before joining the university, William worked in broadcast technical training roles in the South Pacific region during the 1990s – 4 years with the Samoa Broadcasting Service and periodic assignments for the Pacific Islands Broadcasting Association (PIBA) spanning five years. Prior to this, William worked for the Australian Broadcasting Corporation in both radio and television in Sydney for 12 years. In addition to the recent completion of his PhD, which investigated community technology centres in Australia, he has also completed a Masters by Research which looked into the challenges faced by broadcast technicians in isolated locations of the Pacific. William served on the board of the Pacific Island Chapter of the Internet Society from 2009 to 2012.

Declarative Choreographies for Collaborative Business Processes

Prof. Jianwen Su, Dept. of Computer Science, University of California, Santa Barbara, USA
Date: Friday 4th, July, 2014
Time: 14:30–15:30
Venue: 6.105 – Smart Building

ABSTRACT: A business process (BP) is an assembly of work activities (automated or by human performers) to accomplish a business goal. A collaborative business process (CBP) coordinates participating BPs in order to achieve a complex business objective. A CBP may employ a “mediator” process to ensure that its business logic is carried out faithfully. Such a “hub-and-spoke” approach is widely used in practice, e.g., using BPEL. In this approach, the mediator is often a bottleneck in many aspects. Alternatively, a participating BP in a “peer-to-peer” CBP communicates only with its partners based on needs, and there is possibly no single BP that is aware of the global progress during execution. A “choreography” models a class of similar peer-to-peer CBPs. While this is much desired, managing executions of peer-to-peer CBPs becomes much harder; consequently, it significantly increases the demand on suitable and expressive choreography specification languages that can support design-time analysis, automated realization of participating BPs, and runtime management tools. Existing choreography languages focus mostly on message sequences and are weak in modeling data shared by participants and used in sequence constraints. They also assume a fixed number of participants and make no distinction between participant types and participant instances. In this talk, we present a new declarative language based on the emerging artifact-centric BP modeling approach. The language combines first-order and linear temporal (LTL) logics and has three new features: (1) Each participant type is an artifact schema with its information model partially visible to the choreography specification. (2) Participant instance level correlations are supported, and cardinality constraints on such correlations can be explicitly defined. (3) Messages have data models that can be used in choreography constraints.

BIO: Jianwen Su is Professor of Computer Science at UCSB. He received BS/MS degrees from Fudan University, China, and a PhD from the University of Southern California, USA. He has held visiting positions at INRIA and Bell Labs, and was/is an adjunct professor at Fudan, Peking, and Donghua Universities in China. His research concerns data modeling and query languages, scientific databases, formal verification, web services, and business process management. His current work focuses on modeling and analysis of business processes concerning compositions and management. His work on data with nested structures, incremental query evaluation, constraint databases, web services, and data-centric workflow is widely known and cited. Dr. Su received two IBM Faculty Awards, and was a keynote speaker at several international workshops/conferences, including ICSOC 2012 and WS-FM 2013. He has served/is serving on the program committees of many conferences in databases (PODS, ICDT, VLDB, ICDE, EDBT, etc.) and services computing (ICSOC, BPM, WS-FM, ICWE, ICWS, etc.). He was a general co-chair of ICSOC 2013, the general chair of SIGMOD 2001, the PC chair of PODS 2009, and a program co-chair of a few other conferences.

Fact-based Modeling using the Constellation Query Language

Clifford Heath
Date: Monday 30th, June, 2014
Time: 4pm onwards
Venue: 6.105 – Smart Building

ABSTRACT: Fact-based models are built by logical analysis of natural speech. Most human speech concerns facts and concrete examples, rather than the classification or set-based constructs that are prevalent in Semantic Web and relational approaches respectively. As a result, domain experts find semantic models built using fact orientation more approachable and more easily validated than either of those. Rather than applying relational calculus over sets, fact orientation applies first-order logic, which has closer ties to natural expression. Many common errors are avoided and there is less reliance on expertise and intuition. As a controlled natural language, the Constellation Query Language departs from previous fact-based languages like ORM2 and NIAM by avoiding the need to learn a specialised diagramming language and to install the associated tools. Most statements in CQL – whether it is used as a data definition language or to express a query or rule – can be read and correctly understood by an untrained speaker of English (or another natural language). The CQL implementation generates code for the established technologies of relational and non-relational databases and object-oriented programming languages. When the implementation is complete, it can replace and extend both SQL and UML, and so unify the whole field of software production from requirements elicitation, conceptual modelling and business rules, through to DBMS normalisation, implementation of business logic and answering queries (including ad-hoc end-user queries).

BIO: Clifford is a software toolmaker, product architect and designer, whose technical vision and innovations over three decades led to a number of patents, and inspired and gave birth to major enterprise software products and businesses. Now an independent consultant and research scientist, he has presented and published at international conferences including the NATO CAX Forum, and is a member of the Fact-Based Modelling Working Group along with Professors Terry Halpin and G.M. Nijssen. The FBM WG is sponsored by the European Space Agency to define a draft ISO Interchange model for fact-based models. Clifford is a Certified Data Management Professional at masters level.

Enterprise compliance architectures

Radiomics in radiotherapy: Getting the most information out of imaging information

Rapid Learning for improved decision making in Radiation Oncology

Planning and Resource Allocation of Business Process Instances

Renuka Sindhgatta, IBM Research India and Decision Systems Lab, UOW
Date: Monday 16th, June, 2014
Time: 4pm onwards
Venue: 6.105 – Smart Building

Abstract: The planning and allocation of resources to process instances can be considered a job shop scheduling problem with variations. The key challenge, however, is the occurrence of decision nodes in business processes. The information on the set of work items (instances of tasks) that need to be planned for each case (process instance) is available only during case execution. This discussion will introduce common resource allocation patterns and preliminary work on handling decision nodes when allocating resources to cases.
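As a rough illustration of the scheduling flavour of the problem (not the approach presented in the talk), a greedy allocator can assign each work item to the earliest-available qualified resource. The hard part the talk addresses, decision nodes whose work items are revealed only during case execution, is deliberately not handled here:

```python
# Toy greedy resource allocation for a stream of work items.
# Each work item goes to the qualified resource that frees up earliest.

def allocate(work_items, resources):
    """work_items: list of (name, duration, required_skill) tuples.
    resources: dict mapping resource name -> skill.
    Returns a list of (item, resource, start_time) triples."""
    free_at = {r: 0 for r in resources}            # when each resource is free
    schedule = []
    for item, duration, skill in work_items:
        qualified = [r for r, s in resources.items() if s == skill]
        r = min(qualified, key=lambda q: free_at[q])  # earliest available
        schedule.append((item, r, free_at[r]))
        free_at[r] += duration
    return schedule

resources = {"alice": "dev", "bob": "dev", "carol": "qa"}
items = [("implement", 3, "dev"), ("review", 2, "dev"), ("test", 1, "qa")]
print(allocate(items, resources))
# [('implement', 'alice', 0), ('review', 'bob', 0), ('test', 'carol', 0)]
```

With a decision node, the "review" item might never materialise for a given case, which is exactly why planning ahead of execution is non-trivial.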

Ontology mining from real-time industrial data

A new take on business process similarity

Evan Morrison, University of Wollongong
Date: Monday 26th, May, 2014
Time: 4pm onwards
Venue: 6.105 – Smart Building

Abstract: In this talk Evan Morrison will present his latest work on a business process similarity measure that leverages both NLP label similarity and a new take on bisimulation to improve on existing matching systems based on graph edit distance and the Jaccard coefficient.
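For reference, the Jaccard coefficient mentioned above can be computed over the word sets of two activity labels; a minimal sketch:

```python
# Jaccard coefficient over activity-label token sets: |A ∩ B| / |A ∪ B|.

def jaccard(a: str, b: str) -> float:
    """Similarity of two labels as the overlap of their lower-cased word sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

print(jaccard("Approve purchase order", "Approve order"))
# 0.666...: 2 shared words out of 3 distinct words
```

Label-only measures like this ignore behaviour entirely, which is the gap the bisimulation-based component of the talk's measure is meant to fill.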


Abstract Argumentation Theory

Risk-Aware Business Process Management

Semantic Monitoring and Run-time Compensation in Socio-Technical Processes

Yingzhi Gou, University of Wollongong
Date: Monday 28th, April, 2014
Time: 4pm onwards
Venue: 6.105 – Smart Building

Abstract: Socio-technical processes are becoming increasingly important, with the growing recognition of the computational limits of full automation, the growth in popularity of crowdsourcing, and the complexity and openness of modern organizations. A key challenge in managing socio-technical processes is dealing with the flexible, and sometimes dynamic, nature of the execution of human-mediated tasks. It is well recognized that human execution does not always conform to predetermined co-ordination models, and is often error-prone. This paper addresses the problem of semantically monitoring the execution of socio-technical processes to check for non-conformance, and the problem of recovering from (or compensating for) non-conformance. It proposes a semantic solution, leveraging semantically annotated process models to detect non-conformance, and using the same semantic annotations to identify compensatory human-mediated tasks.
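A minimal sketch of the effect-based idea, with hypothetical annotations: compare a task's expected (annotated) effects against the observed state, and map any missing effects to compensating human-mediated tasks. The effect names and compensator table below are invented for illustration:

```python
# Toy effect-based non-conformance detection and compensation lookup.

def detect_nonconformance(expected_effects: set, observed_state: set) -> set:
    """Return the annotated effects missing from the observed state."""
    return expected_effects - observed_state

def compensate(missing: set, compensators: dict) -> list:
    """Map each missing effect to a compensating human-mediated task."""
    return [compensators[f] for f in sorted(missing) if f in compensators]

expected = {"order_approved", "customer_notified"}   # task annotation
observed = {"order_approved"}                        # state after execution
compensators = {"customer_notified": "resend notification email"}

missing = detect_nonconformance(expected, observed)
print(compensate(missing, compensators))   # ['resend notification email']
```

Real annotations would be logical formulae rather than flat fact sets, but the monitor-then-compensate loop has the same shape.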

Assurance by design

Shrikant Deshpande, Desh Assurance and Risk Consulting
Date: Monday 14th, April, 2014
Time: 4pm onwards
Venue: 6.105 – Smart Building

Abstract: Objectives: To introduce the concept of assurance design as part of multidisciplinary business, technology, software and enterprise architecture design processes. Summary: Assurance is an independent opinion about a subject matter. This opinion is required by several stakeholders internal and external to an organisation, e.g. line management, senior management and shareholders. There are multiple assurance providers within the organisation, e.g. quality assurance, audit, risk, information security and compliance, to name a few. In a real-time, high-volume and high-velocity transaction ecosystem, traditional manual or semi-automated methods of extracting data, examining transactions, sampling evidence and drawing conclusions about the whole ecosystem are infeasible and outmoded. There is a strong need for research into how technology design can align the various assurance providers’ disparate, and at times inefficient and wasteful, methods and provide more efficient methods of assurance provisioning. Assurance requires an interdisciplinary approach and research contributions from software, enterprise architecture, service design and other disciplines.

Bio: Shrikant has over 25 years of banking experience in core banking, mobile banking, information risk, IT security, IT audit and compliance. He has held IT management, IT audit, IT risk and outsourcing compliance positions over a career spanning 20 years with Citigroup and 3 years with Westpac, managing program risk and security consulting responsibilities. He has presented at professional ISACA conferences in Johannesburg, Sydney and Brisbane, and at the Third Australasian Symposium on Service Research and Innovation (ASSRI’13) in Sydney. Shrikant is also active in academia as a guest lecturer, peer reviewer of academic papers and participant in research networks. He holds a master’s degree in systems analysis from Aston University in the UK, and professional certifications from ISACA (CISA, CRISC, CGEIT), the Institute of Internal Auditors (CIA) and ISC2 (CISSP).

Internal analytics: Data-driven approaches to realize the adaptive enterprise

Local Search for Constraint Satisfaction

Professor Abdul Sattar, Computer Science and Artificial Intelligence, Griffith University
Date: Monday 2nd, April, 2014
Time: 4pm onwards
Venue: 6.105 – Smart Building

Abstract: The constraint satisfaction paradigm has become a powerful approach for modelling complex real-world problems and solving them efficiently using general-purpose constraint solving techniques. This talk will discuss how a range of problems from diverse fields can be represented as constraint satisfaction problems. Given that these problems are in general computationally intractable, we argue that local search based solving methods are more suitable than backtracking based methods. We will then present some of our recent results on solving propositional satisfiability problems and the vertex cover problem, and discuss some open issues.
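As a concrete illustration of the local-search idea (a generic WalkSAT-style sketch, not the speaker's own solvers), the following repeatedly picks an unsatisfied clause and flips one of its variables, mixing greedy moves with random-walk moves to escape local minima:

```python
# WalkSAT-style local search for propositional satisfiability.
import random

def satisfied(clause, assignment):
    """A clause is a list of non-zero ints; literal l is true when
    variable |l| has the truth value matching l's sign."""
    return any(assignment[abs(l)] == (l > 0) for l in clause)

def local_search_sat(clauses, num_vars, max_flips=1000, seed=0):
    """Search for a satisfying assignment by flipping variables of
    unsatisfied clauses; returns a model dict, or None on failure."""
    rng = random.Random(seed)
    assignment = {v: rng.choice([True, False]) for v in range(1, num_vars + 1)}
    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c, assignment)]
        if not unsat:
            return assignment                  # all clauses satisfied
        clause = rng.choice(unsat)
        if rng.random() < 0.5:                 # noise: random-walk move
            var = abs(rng.choice(clause))
        else:                                  # greedy: best-scoring flip
            def flips_score(v):
                assignment[v] = not assignment[v]
                score = sum(satisfied(c, assignment) for c in clauses)
                assignment[v] = not assignment[v]
                return score
            var = max((abs(l) for l in clause), key=flips_score)
        assignment[var] = not assignment[var]
    return None                                # no model found in budget

# (p or q) and (~p or q) and (~q or r) is satisfiable, e.g. q = r = True
clauses = [[1, 2], [-1, 2], [-2, 3]]
print(local_search_sat(clauses, 3))
```

Unlike backtracking, this search is incomplete (it cannot prove unsatisfiability), which is the trade-off the talk weighs against its scalability on large instances.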

Bio: Professor Abdul Sattar is the founding Director of the Institute for Integrated and Intelligent Systems and a Professor of Computer Science and Artificial Intelligence at Griffith University. He is also a Research Leader in NICTA’s Optimisation Research Group. He has been an academic staff member at Griffith University since February 1992 as a lecturer (1992-95), senior lecturer (1996-99), and professor (2000-present) within the School of Information and Communication Technology. Prior to his career at Griffith University, he was a lecturer in Physics in Rajasthan, India (1980-82), and a research scholar at Jawaharlal Nehru University, India (1982-85), the University of Waterloo, Canada (1985-87), and the University of Alberta, Canada (1987-1991). He has published about 200 papers in international journals and conferences; several of these appeared in premier venues such as IJCAI, AAAI, AIJ, JAIR, CP and AAMAS. His research team has won several international awards in recent years, including the IJCAI 2007 Distinguished Paper award, the PRICAI 2010 best paper award, gold medals in the 2005, 2007 and 2012 SAT solver competitions, and first place in the International Planning Competitions in 2008 and 2011. He has successfully supervised over 20 PhD students. One of his students won a best thesis award nationally in Australia as well as internationally at ICASP 2012. His current research interests include knowledge representation and reasoning, constraint satisfaction, intelligent scheduling, rational agents, propositional satisfiability, temporal reasoning, temporal databases, and bioinformatics.

Application of Distributed Constraint Optimisation in an Agent Based Modelling problem

Automated bug classification and norm mining – part two

Introduction to Hadoop Ecosystem – part two

Soft Knowledge Management and BPM implications

Dr. Peter Busch
Date: Monday 3rd, March, 2014
Venue: 6.105 – Smart Building

Abstract: Soft or tacit knowledge forms the basis of our understanding and is typically inarticulable, although some such knowledge may be codified over time. Various means exist to test for such knowledge, but more importantly organisations also wish to know how well knowledge flows within them, that is to say how well they are learning. Business Process Management has been established for roughly as long as Knowledge Management, but adopts a more technocratic view of managing processes in organisations, also, however, as an indirect means of helping organisations learn. This talk will present research on concepts of soft knowledge, how to assess it in individuals, and means for exploring potential flows of this knowledge from one individual to the next. Techniques covered will include research approaches, grounded theory, psychometric assessment, formal concept analysis, participant observation and social network analysis. Finally we will explore potential overlaps with business process management through approaches such as workflow or process mining and Petri nets.

Bio: Dr. Peter Busch is the postgraduate coursework director for the Department of Computing at Macquarie University as well as being a Senior Lecturer. His research is in the area of knowledge management, organisational learning and business process management. His teaching covers databases, IT project and systems management, enterprise systems integration and systems analysis and design. His education is from the University of Adelaide, Monash University, the University of Tasmania, Macquarie University and most recently the University of Sydney.