Loizos Michael, Ph.D. / Publications Page





Selected publications in reverse chronological order. Made available for research purposes only.




Cognitive Systems: Argument and Cognition
A. Kakas and L. Michael
IEEE Intelligent Informatics Bulletin, vol. 17 (1), pg. 14–20, 2016.
[ Invited feature article. ]
Developing systems that are aware of, and accommodate, the cognitive capabilities and limitations of human users is emerging as a key characteristic of a new paradigm of cognitive computing in Artificial Intelligence. According to this paradigm, the behavior of such cognitive systems is modeled on that of human personal assistants: able to understand the motivations and personal likings/affinities of their interlocutors, while also being able to explain, and ultimately persuade the latter about, their computed solution (e.g., a proposed action) to a problem.
This paper examines the link between argument and cognition from the psychological and the computational perspectives, and investigates how the synthesis of work on reasoning and narrative text comprehension from Cognitive Psychology with work on computational argumentation from AI can offer a scientifically sound and pragmatic basis for building human-aware cognitive systems for everyday tasks. The paper thus aims to reveal how argumentation can form the science of common sense thought on which new forms of cognitive systems can be engineered.
@article{Kakas.etal_2016_ArgumentAndCognition,
     author = {Antonis Kakas and Loizos Michael},
     title = {{Cognitive Systems: Argument and Cognition}},
     journal = {IEEE Intelligent Informatics Bulletin},
     publisher = {IEEE Computer Science},
     volume = {17},
     number = {1},
     pages = {14--20},
     month = {December},
     year = {2016},
}
A preprint version of this article is available locally for download.
Logics in Artificial Intelligence, 15th European Conference, JELIA 2016, Larnaca, Cyprus, November 9-11, 2016, Proceedings
L. Michael and A. Kakas (eds.)
Springer, 2016.
These are the proceedings of the 15th European Conference on Logics in Artificial Intelligence (JELIA 2016), held during November 9–11, 2016, in Larnaca, Cyprus, and organized by the University of Cyprus and the Open University of Cyprus.
The European Conference on Logics in Artificial Intelligence (or Journées Européennes sur la Logique en Intelligence Artificielle — JELIA) began back in 1988, as a workshop, in response to the need for a European forum for the discussion of emerging work in this field. Since then, JELIA has been organised biennially, with proceedings published in the Springer series Lecture Notes in Artificial Intelligence. Previous meetings took place in Roscoff, France (1988), Amsterdam, The Netherlands (1990), Berlin, Germany (1992), York, UK (1994), Évora, Portugal (1996), Dagstuhl, Germany (1998), Málaga, Spain (2000), Cosenza, Italy (2002), Lisbon, Portugal (2004), Liverpool, UK (2006), Dresden, Germany (2008), Helsinki, Finland (2010), Toulouse, France (2012), and Madeira, Portugal (2014).
The aim of JELIA is to bring together active researchers interested in all aspects concerning the use of logics in artificial intelligence (AI) to discuss current research, results, problems, and applications of both theoretical and practical nature. JELIA strives to foster links and facilitate cross-fertilization of ideas among researchers from various disciplines, among researchers from academia and industry, and between theoreticians and practitioners.
The increasing interest in this forum, its international level with growing participation of researchers from outside Europe, and the overall technical quality have turned JELIA into a major biennial forum for the discussion of logic-based approaches to AI.
For the 2016 edition of JELIA, authors were invited to submit papers presenting original and unpublished research in all areas related to the use of logics in AI. To encourage a discussion of the links and synergies between AI and cognitive psychology, this year's edition of JELIA encouraged submissions on logics in AI and cognition, and included invited talks related to this topic.
There were 88 submissions, each reviewed by three Program Committee members. The committee decided to accept 32 full papers for regular presentations or system demonstrations, and 10 short papers for spotlight/poster presentations. The accepted papers span a number of areas within logics in AI, including: belief revision, answer set programming, argumentation, probabilistic reasoning, handling inconsistencies, temporal logics and planning, description logics, and decidability and complexity results. The program also included five invited talks by Costas Bekas, Tarek R. Besold, Marc Denecker, Torsten Schaub, and Keith Stenning.
We would like to thank the authors of all the submitted papers and the members of the Program Committee and the additional experts who helped during the reviewing process, for contributing and ensuring the high scientific quality of JELIA 2016. We would also like to acknowledge the support of the University of Cyprus, the Open University of Cyprus, the Cyprus Tourism Organisation, Austrian Airlines, IBM, Springer, and EasyChair.
September 2016
Loizos Michael
Antonis Kakas
@proceedings{Michael.etal_2016_JELIA2016Proceedings,
     editor = {Loizos Michael and Antonis Kakas},
     title = {{Logics in Artificial Intelligence, 15th European Conference, JELIA 2016, Larnaca, Cyprus, November 9-11, 2016, Proceedings}},
     publisher = {Springer},
     volume = {10021},
     month = {November},
     year = {2016},
}
Empirical Investigation of Learning-Based Imputation Policies
H. Skouteli and L. Michael
2nd Global Conference on Artificial Intelligence (GCAI 2016), 2016.
Certain approaches for missing-data imputation propose the use of learning techniques to identify regularities and relations between attributes, which are subsequently used to impute some of the missing data. Prior theoretical results suggest that the soundness and completeness of such learning-based techniques can be improved by applying rules anew on the imputed data, as long as one is careful in choosing which rules to apply at each stage. This work presents an empirical investigation of three natural learning-based imputation policies: training rules once and applying them repeatedly; training new rules at each iteration; continuing the training of previous rules at each iteration. We examine how the three policies fare across different settings. In line with the predictions of the theory, we find that an iterative learn-predict approach is preferable.
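The three policies contrasted in the abstract can be illustrated with a minimal sketch. All names and the trivial majority-vote "rules" below are hypothetical stand-ins, not the paper's actual learning method; they only show how "train once, apply repeatedly" differs from "retrain at each iteration" on progressively imputed data.

```python
# Hypothetical sketch of learning-based imputation policies.
# A "rule" here is a trivial predictor: the most common observed value
# of an attribute. The paper's rules are learned relations between
# attributes; this stands in for them only to show the control flow.
from collections import Counter

MISSING = None

def learn_rule(rows, attr):
    """Learn a trivial rule: predict the most common observed value of attr."""
    observed = [r[attr] for r in rows if r[attr] is not MISSING]
    if not observed:
        return None
    value, _ = Counter(observed).most_common(1)[0]
    return lambda row: value

def impute_once(rows, rules):
    """Apply each attribute's rule wherever that attribute is missing."""
    for row in rows:
        for attr, rule in rules.items():
            if row[attr] is MISSING and rule is not None:
                row[attr] = rule(row)
    return rows

def policy_train_once(rows, attrs, iterations=3):
    """Policy 1: train rules once, then apply them repeatedly."""
    rules = {a: learn_rule(rows, a) for a in attrs}
    for _ in range(iterations):
        rows = impute_once(rows, rules)
    return rows

def policy_retrain(rows, attrs, iterations=3):
    """Policy 2: train new rules at each iteration, on the imputed data."""
    for _ in range(iterations):
        rules = {a: learn_rule(rows, a) for a in attrs}
        rows = impute_once(rows, rules)
    return rows
```

The third policy (continuing the training of previous rules) would warm-start each iteration's learner from the previous rules rather than learning them afresh.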
@conference{Skouteli.etal_2016_EmpiricalSLAP,
     author = {Hara Skouteli and Loizos Michael},
     title = {{Empirical Investigation of Learning-Based Imputation Policies}},
     booktitle = {Proceedings of the 2nd Global Conference on Artificial Intelligence (GCAI 2016)},
     publisher = {EasyChair},
     volume = {41},
     pages = {161--173},
     month = {September},
     year = {2016},
}
Tackling the Winograd Schema Challenge Through Machine Logical Inferences
N. Isaak and L. Michael
8th European Starting AI Researcher Symposium (STAIRS 2016), 2016.
[ System tied at first place on the WSC competition at IJCAI 2016. ]
Levesque has argued that the problem of resolving difficult pronouns in a carefully chosen set of twin sentences, which he refers to as the Winograd Schema Challenge (WSC), could serve as a conceptually and practically appealing alternative to the well-known Turing Test. As he put it, probably anything that answers correctly a series of these questions is thinking in the full-bodied sense we usually reserve for people. In this paper we examine the task of resolving cases of definite pronouns: specifically, those for which traditional linguistic constraints on co-reference, as well as commonly-used resolution heuristics, are either not useful, or follow a procedure very similar to a statistical approach, without invoking common logic the way humans do.
@conference{Isaak.etal_2016_WSCviaLogicalInferences,
     author = {Nicos Isaak and Loizos Michael},
     title = {{Tackling the Winograd Schema Challenge Through Machine Logical Inferences}},
     booktitle = {Proceedings of the 8th European Starting AI Researcher Symposium (STAIRS 2016)},
     publisher = {IOS Press},
     volume = {284},
     pages = {75--86},
     month = {August},
     year = {2016},
}
A Hybrid Approach to Commonsense Knowledge Acquisition
C. Rodosthenous and L. Michael
8th European Starting AI Researcher Symposium (STAIRS 2016), 2016.
[ Received best paper presentation award. ]
This work presents a knowledge acquisition platform and a certain game developed on that platform for endowing machines with common sense, by following a hybrid approach that combines crowdsourcing techniques, knowledge engineering, and automated reasoning. Short narratives are presented to players, who are asked to combine fragments of text into rules that would correctly answer a given question, to evaluate the appropriateness of gathered rules, and to resolve conflicts between them by assigning priorities. The text fragments that are used are a priori translated by a knowledge engineer into a machine-readable predicate form. Players are rewarded based not only on their inter-agreement (as in most games with a purpose) but also based on the objective ability of the rules to answer questions correctly, as determined by an underlying reasoning engine. Beyond discussing the knowledge acquisition platform and the game design, we analyze the common sense that has been gathered during the deployment of the game over a five-month period and we use the acquired knowledge to answer questions on unknown stories.
@conference{Rodosthenous.etal_2016_HybridKnowledgeAcquisition,
     author = {Christos Rodosthenous and Loizos Michael},
     title = {{A Hybrid Approach to Commonsense Knowledge Acquisition}},
     booktitle = {Proceedings of the 8th European Starting AI Researcher Symposium (STAIRS 2016)},
     publisher = {IOS Press},
     volume = {284},
     pages = {111--122},
     month = {August},
     year = {2016},
}
Argumentation: Reconciling Human and Automated Reasoning
A. Kakas, L. Michael, and F. Toni
(IJCAI 2016) 2nd Workshop on Bridging the Gap between Human and Automated Reasoning, 2016.
We study how using argumentation as an alternative foundation for logic gives a framework in which we can reconcile human and automated reasoning. We analyse this reconciliation between human and automated reasoning at three levels: (1) at the level of classical, strict reasoning on which, to this day, automated reasoning and computing are based, (2) at the level of natural or ordinary human-level reasoning as studied in cognitive psychology, and which artificial intelligence, albeit in its early stages, is endeavouring to automate, and (3) at the level of the recently emerged cognitive computing paradigm, where systems are required to be cognitively compatible with human reasoning based on common sense or expert knowledge, machine-learned from unstructured data in corpora over the web or other sources.
@conference{Kakas.etal_2016_ArgumentationReconciliation,
     author = {Antonis Kakas and Loizos Michael and Francesca Toni},
     title = {{Argumentation: Reconciling Human and Automated Reasoning}},
     booktitle = {Proceedings of the (IJCAI 2016) 2nd Workshop on Bridging the Gap between Human and Automated Reasoning},
     publisher = {CEUR-WS.org},
     volume = {1651},
     pages = {43--60},
     month = {July},
     year = {2016},
}
A preprint version of this article is available locally for download.
Cognitive Reasoning and Learning Mechanisms
L. Michael
(BICA 2016) 4th International Workshop on Artificial Intelligence and Cognition (AIC 2016), 2016.
With an eye towards the development of systems for common everyday tasks — that operate in a manner that is cognitively-compatible with the argumentative nature of human reasoning — we present mechanisms for reasoning with and learning cognitive programs: common sense, symbolically represented in the form of prioritized association rules. The FLASH mechanism supports a fast argumentation-flavored type of reasoning, while sidestepping the rigidness of traditional proof procedures in formal logic. The NERD mechanism supports the never-ending acquisition of cognitive programs, without human supervision.
@conference{Michael_2016_CognitiveMechanisms,
     author = {Loizos Michael},
     title = {{Cognitive Reasoning and Learning Mechanisms}},
     booktitle = {Proceedings of the (BICA 2016) 4th International Workshop on Artificial Intelligence and Cognition (AIC 2016)},
     year = {2016},
}
A preprint version of this article is available locally for download.
Cognitive Programming
L. Michael, A. Kakas, R. Miller, and G. Turán
3rd International Workshop on Artificial Intelligence and Cognition (AIC 2015), 2015.
The widespread access to computing-enabled devices and the World Wide Web has, in a sense, liberated the ordinary user from reliance on technically-savvy experts. To complete this emancipation, a new way of interacting with, and controlling the behavior of, computing-enabled devices is needed. This position paper argues for the adoption of cognitive programming as the paradigm for this user-machine interaction, whereby the machine is no longer viewed as a tool at the disposal of the user, but as an assistant capable of being supervised and guided by the user in a natural and continual manner, and able to acquire and employ common sense to help the user in the completion of everyday tasks. We argue that despite the many challenges that the proposed paradigm presents, recent advances in several key areas of Artificial Intelligence, along with lessons learned from work in Psychology, give reasons for optimism.
@conference{Michael.etal_2015_CognitiveProgramming,
     author = {Loizos Michael and Antonis Kakas and Rob Miller and Gy\"orgy Tur\'an},
     title = {{Cognitive Programming}},
     booktitle = {Proceedings of the 3rd International Workshop on Artificial Intelligence and Cognition (AIC 2015)},
     publisher = {CEUR-WS.org},
     volume = {1510},
     pages = {3--18},
     month = {September},
     year = {2015},
}
A preprint version of this article is available locally for download.
Jumping to Conclusions
L. Michael
(IJCAI 2015) 2nd International Workshop on Defeasible and Ampliative Reasoning (DARe 2015), 2015.
Inspired by the profound effortlessness (but also the substantial carelessness) with which humans seem to draw inferences when given even only partial information, we consider a unified formal framework for computational cognition, placing our emphasis on the existence of naturalistic mechanisms for representing, manipulating, and acquiring knowledge. Through formal results and discussion, we suggest that such fast and loose mechanisms could provide a concrete basis for the design of cognitive systems.
@conference{Michael_2015_JumpingToConclusions,
     author = {Loizos Michael},
     title = {{Jumping to Conclusions}},
     booktitle = {Proceedings of the (IJCAI 2015) 2nd International Workshop on Defeasible and Ampliative Reasoning (DARe 2015)},
     volume = {1423},
     month = {July},
     year = {2015},
}
A preprint version of this article is available locally for download.
The Disembodied Predictor Stance
L. Michael
Pattern Recognition Letters, vol. 64 (C), pg. 21–29, 2015.
[ Special Issue 'Philosophical Aspects of Pattern Recognition'. ]
Pattern recognition is typically described as the discipline investigating how to recognize patterns and regularities in data, with the description leaving tacit that these patterns and regularities are somehow exploited, applied, acted upon, or simply announced once recognized. The aforementioned omission is more than a linguistic one, and is reflected on the emphasis that technical, theoretical, and empirical work on pattern recognition places on the predictors it develops, analyzes, and deploys. Most research on pattern recognition adopts, effectively, a stance amounting to treating the predictors as being disembodied, taken to mean that they operate without affecting the environment about which they make predictions. This essay argues for the dismissal of this stance, and demonstrates that the adoption of an embodied predictor stance is philosophically and technically not only possible, but also desirable.
@article{Michael_2015_DisembodiedPredictors,
     author = {Loizos Michael},
     title = {{The Disembodied Predictor Stance}},
     journal = {Pattern Recognition Letters},
     publisher = {Elsevier},
     volume = {64},
     number = {C},
     pages = {21--29},
     month = {October},
     year = {2015},
}
A preprint version of this article is available locally for download.
Introspective Forecasting
L. Michael
24th International Joint Conference on Artificial Intelligence (IJCAI 2015), 2015.
Science ultimately seeks to reliably predict aspects of the future; but, how is this even possible in light of the logical paradox that making a prediction may cause the world to evolve in a manner that defeats it? We show how learning can naturally resolve this conundrum. The problem is studied within a causal or temporal version of the Probably Approximately Correct semantics, extended so that a learner's predictions are first recorded in the states upon which the learned hypothesis is later applied. On the negative side, we make concrete the intuitive impossibility of predicting reliably, even under very weak assumptions. On the positive side, we identify conditions under which a generic learning schema, akin to randomized trials, supports agnostic learnability.
@conference{Michael_2015_IntrospectiveForecasting,
     author = {Loizos Michael},
     title = {{Introspective Forecasting}},
     booktitle = {Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI 2015)},
     publisher = {AAAI Press / IJCAI},
     pages = {3714--3720},
     month = {July},
     year = {2015},
}
A preprint version of this article is available locally for download.
STAR: A System of Argumentation for Story Comprehension and Beyond
I.-A. Diakidoy, A. Kakas, L. Michael, and R. Miller
12th International Symposium on Logical Formalizations of Commonsense Reasoning (Commonsense 2015), 2015.
This paper presents the STAR system, a system for automated narrative comprehension, developed on top of an argumentation-theoretic formulation of defeasible reasoning, and strongly following guidelines from the psychology of comprehension. We discuss the system's use in psychological experiments on story comprehension, and our plans for its broader use in empirical studies concerning wider issues of commonsense reasoning.
@conference{Diakidoy.etal_2015_STARSystem,
     author = {Irene-Anna Diakidoy and Antonis Kakas and Loizos Michael and Rob Miller},
     title = {{STAR: A System of Argumentation for Story Comprehension and Beyond}},
     booktitle = {Proceedings of the 12th International Symposium on Logical Formalizations of Commonsense Reasoning (Commonsense 2015)},
     pages = {64--70},
     month = {March},
     year = {2015},
}
A preprint version of this article is available locally for download.
Narrative Similarity as Common Summary: Evaluation of Behavioral and Computational Aspects
E. Kypridemou and L. Michael
Literary and Linguistic Computing, vol. 29 (4), pg. 532–560, 2014.
[ Special Issue 'Computational Models of Narrative'. ]
One of the main efforts of recent computational linguistics is to formalize the process of identifying and evaluating similarity between narratives, which is argued to be a key concept for all human behavior. Analyses of the data of 52 adults that participated in our empirical study offered evidence that supports the position that narrative similarity can be equated to the existence of a common summary between the narratives involved. As a further step for applying the above hypothesis, we introduced our own methods for measuring similarity of narratives through the notion of summary and compared them with some of the existing lexical-matching similarity measures. Comparisons of each computational measure with the actual similarity judgments of human participants revealed that methods that merely measure the number of shared words between two stories were unable to capture human similarity judgments, especially for stories of moderate similarity levels. However, the summary-based methods of our approach managed to reproduce human ratings for story pairs across all the range of similarity (from high similarity to low similarity).
@article{Kypridemou.etal_2014_NarrativeSimilarityExperiments,
     author = {Elektra Kypridemou and Loizos Michael},
     title = {{Narrative Similarity as Common Summary: Evaluation of Behavioral and Computational Aspects}},
     journal = {Literary and Linguistic Computing},
     publisher = {Oxford University Press},
     volume = {29},
     number = {4},
     pages = {532--560},
     month = {December},
     year = {2014},
}
Story Comprehension through Argumentation
I.-A. Diakidoy, A. Kakas, L. Michael, and R. Miller
5th International Conference on Computational Models of Argument (COMMA 2014), 2014.
This paper presents a novel application of argumentation for automated Story Comprehension (SC). It uses argumentation to develop a computational approach for SC as this is understood and studied in psychology. Argumentation provides uniform solutions to various representational and reasoning problems required for SC such as the frame, ramification, and qualification problems, as well as the problem of contrapositive reasoning with default information. The grounded semantics of argumentation provides a suitable basis for the construction and revision of comprehension models, through the synthesis of the explicit information from the narrative in the text with the implicit (in the reader's mind) common sense world knowledge pertaining to the topic(s) of the story given in the text. We report on the empirical evaluation of the approach through a prototype system and its ability to capture both the majority and the variability of understanding of stories by human readers. This application of argumentation can provide an important test-bed for the more general development of computational argumentation.
@conference{Diakidoy.etal_2014_StoryComprehension,
     author = {Irene-Anna Diakidoy and Antonis Kakas and Loizos Michael and Rob Miller},
     title = {{Story Comprehension through Argumentation}},
     booktitle = {Proceedings of the 5th International Conference on Computational Models of Argument (COMMA 2014)},
     publisher = {IOS Press},
     volume = {266},
     pages = {31--42},
     month = {September},
     year = {2014},
}
A preprint version of this article is available locally for download.
Gathering Background Knowledge for Story Understanding through Crowdsourcing
C. T. Rodosthenous and L. Michael
5th Workshop on Computational Models of Narrative (CMN 2014), 2014.
Successfully comprehending stories involves integration of the story information with the reader's own background knowledge. A prerequisite, then, of building automated story understanding systems is the availability of such background knowledge. We take the approach that knowledge appropriate for story understanding can be gathered by sourcing the task to the crowd. Our methodology centers on breaking this task into a sequence of more specific tasks, so that human participants not only identify relevant knowledge, but also convert it into a machine-readable form, generalize it, and evaluate its appropriateness. These individual tasks are presented to human participants as missions in an online game, offering them, in this manner, an incentive for their participation. We report on an initial deployment of the game, and discuss our ongoing work for integrating the knowledge gathering task into a full-fledged story understanding engine.
@conference{Rodosthenous.etal_2014_CrowdsourcedKnowledgeGathering,
     author = {Christos T. Rodosthenous and Loizos Michael},
     title = {{Gathering Background Knowledge for Story Understanding through Crowdsourcing}},
     booktitle = {Proceedings of the 5th Workshop on Computational Models of Narrative (CMN 2014)},
     publisher = {Schloss Dagstuhl--Leibniz-Zentrum f\"ur Informatik},
     volume = {41},
     pages = {154--163},
     month = {July--August},
     year = {2014},
}
Simultaneous Learning and Prediction
L. Michael
14th International Conference on Principles of Knowledge Representation and Reasoning (KR 2014), 2014.
Agents in real-world environments may have only partial access to available information, often in an arbitrary, or hard to model, manner. By reasoning with knowledge at their disposal, agents may hope to recover some missing information. By acquiring the knowledge through a process of learning, the agents may further hope to guarantee that the recovered information is indeed correct.
Assuming only a black-box access to a learning process and a prediction process that are able to cope with missing information in some principled manner, we examine how the two processes should interact so that they improve their overall joint performance. We identify natural scenarios under which the interleaving of the processes is provably beneficial over their independent use.
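Since the abstract treats learning and prediction strictly as black boxes, the interleaving it studies can be sketched as a simple alternation loop. The function and callback names below are illustrative assumptions, not the paper's notation: each round uses the current hypothesis to recover some missing information, then re-learns on the partially completed data.

```python
# Illustrative sketch of interleaving a black-box learner and predictor.
# learn(examples) -> hypothesis
# predict(hypothesis, example) -> example with (some) missing entries filled
# Both callables are opaque; only their interaction is modeled here.

def interleave(examples, learn, predict, rounds=3):
    """Alternate prediction and re-learning for a fixed number of rounds."""
    hypothesis = learn(examples)
    for _ in range(rounds):
        # Recover missing information using the current hypothesis ...
        examples = [predict(hypothesis, e) for e in examples]
        # ... then learn anew from the partially completed examples.
        hypothesis = learn(examples)
    return hypothesis, examples
```

Under this interface, the independent use of the two processes corresponds to `rounds=0` plus a single prediction pass, which is what the interleaved variant is compared against.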
@conference{Michael_2014_SLAP,
     author = {Loizos Michael},
     title = {{Simultaneous Learning and Prediction}},
     booktitle = {Proceedings of the 14th International Conference on Principles of Knowledge Representation and Reasoning (KR 2014)},
     publisher = {AAAI Press},
     pages = {348--357},
     month = {July},
     year = {2014},
}
A preprint version of this article is available locally for download.
Write Like I Write: Herding in the Language of Online Reviews
L. Michael and J. Otterbacher
8th International AAAI Conference on Weblogs and Social Media (ICWSM 2014), 2014.
Our behaviors often converge with those of others, and language within social media is no exception. We consider reviews of tourist attractions at TripAdvisor (TA), the world's largest resource for travel information. Unlike social networking sites, TA review forums do not facilitate direct interaction between participants. Nonetheless, theory suggests that language is guided by writers' conception of their audience, and that their style can shift in response. We implement a model of herding as a local transmission process, exploring the hypothesis that a reviewer is influenced by how preceding reviews manifest a given stylistic feature (e.g., pronouns, paralinguistic devices). We find that reviewers are more likely to use unusual features when such characteristics appear in their local context. The extent to which reviewers are influenced by context is correlated to attributes shared in their profiles, as well as their sentiment toward the attraction reviewed. Our results suggest that language can be influenced by others, even in an asynchronous environment with little to no interpersonal interaction. In other words, our behaviors may be susceptible to manipulation in social media; it may not always be the case that we write like ourselves.
@conference{Michael.etal_2014_ReviewLanguageHerding,
     author = {Loizos Michael and Jahna Otterbacher},
     title = {{Write Like I Write: Herding in the Language of Online Reviews}},
     booktitle = {Proceedings of the 8th International AAAI Conference on Weblogs and Social Media (ICWSM 2014)},
     publisher = {AAAI Press},
     pages = {356--365},
     month = {June},
     year = {2014},
}
A preprint version of this article is available locally for download.
An Empirical Investigation of Ceteris Paribus Learnability
L. Michael and E. Papageorgiou
23rd International Joint Conference on Artificial Intelligence (IJCAI 2013), 2013.
Eliciting user preferences constitutes a major step towards developing recommender systems and decision support tools. Assuming that preferences are ceteris paribus allows for their concise representation as Conditional Preference Networks (CP-nets). This work presents the first empirical investigation of an algorithm for reliably and efficiently learning CP-nets in a manner that is minimally intrusive. At the same time, it introduces a novel process for efficiently reasoning with (the learned) preferences.
@conference{Michael.etal_2013_CPNetsExperiments,
     author = {Loizos Michael and Elena Papageorgiou},
     title = {{An Empirical Investigation of Ceteris Paribus Learnability}},
     booktitle = {Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI 2013)},
     publisher = {AAAI Press / IJCAI},
     pages = {1537--1543},
     month = {August},
     year = {2013},
}
A preprint version of this article is available locally for download.
Auctioning License Plate Numbers
L. Michael and D. Stavrou
(IJCAI 2013) 7th Multidisciplinary Workshop on Advances in Preference Handling (M-Pref 2013), 2013.
The Road Transport Department (RTD) of Cyprus utilizes a certain mechanism for auctioning multiple distinguished license plate numbers so that each vehicle owner receives at most one number. In investigating this particular auction, we: (i) propose an appropriate natural bidding language, and establish the complexity of winner determination; (ii) represent the RTD mechanism and a proposed one-shot mechanism in a formal framework designed for modelling economic environments; (iii) implement agents with bidding behavior typical to humans using the RTD mechanism. Simulations show that the bidding language supports complex interactions, and that the one-shot mechanism generally leads to improved revenue and social welfare.
@conference{Michael.etal_2013_LicensePlateAuctioning,
     author = {Loizos Michael and Demetra Stavrou},
     title = {{Auctioning License Plate Numbers}},
     booktitle = {Working notes of the (IJCAI 2013) 7th Multidisciplinary Workshop on Advances in Preference Handling (M-Pref 2013)},
     month = {August},
     year = {2013},
}
A preprint version of this article is available locally for download.
Machines with WebSense
L. Michael
11th International Symposium on Logical Formalizations of Commonsense Reasoning (Commonsense 2013), 2013.
We discuss the design and development of a novel web search engine able to respond to user queries with inferences that follow from the collective human knowledge found across the Web. The engine's knowledge — or websense — is represented and reasoned with in a logical fashion, and is acquired autonomously via principled learning, so that the soundness of the supported inferences can be a priori guaranteed through a formal analysis of the learning method used. This engine brings together the traditional AI goal of endowing machines with common sense, and the contemporary AI goal of making sense of the Web through machine reading.
@conference{Michael_2013_Websense,
     author = {Loizos Michael},
     title = {{Machines with WebSense}},
     booktitle = {Working notes of the 11th International Symposium on Logical Formalizations of Commonsense Reasoning (Commonsense 2013)},
     month = {May},
     year = {2013},
}
A preprint version of this article is available locally for download.
Story Understanding... Calculemus!
L. Michael
11th International Symposium on Logical Formalizations of Commonsense Reasoning (Commonsense 2013), 2013.
Stories lie at the heart of the human experience, and efforts to comprehend the latter can be aided by investigating how humans understand stories. We take a computational view of this problem, and suggest that a form of logic can be fruitfully used to represent and reason with stories. Instead of seeking to define what counts as story understanding, we investigate tasks typically associated with story understanding (e.g., question answering), and demonstrate how those can be formalized and tackled within the logic-based framework we propose.
@conference{Michael_2013_StoryCalculemus,
     author = {Loizos Michael},
     title = {{Story Understanding... Calculemus!}},
     booktitle = {Working notes of the 11th International Symposium on Logical Formalizations of Commonsense Reasoning (Commonsense 2013)},
     month = {May},
     year = {2013},
}
A preprint version of this article is available locally for download.
Congestion Control in Wireless Sensor Networks based on Bird Flocking Behaviour
P. Antoniou, A. Pitsillides, T. Blackwell, A. Engelbrecht, and L. Michael
Computer Networks, vol. 57 (5), pg. 1167–1191, 2013.
This paper proposes that the flocking behavior of birds can guide the design of a robust, scalable and self-adaptive congestion control protocol in the context of wireless sensor networks (WSNs). The proposed approach adopts a swarm intelligence paradigm inspired by the collective behavior of bird flocks. The main idea is to 'guide' packets (birds) to form flocks and flow towards the sink (global attractor), whilst trying to avoid congestion regions (obstacles). The direction of motion of a packet flock is influenced by repulsion and attraction forces between packets, as well as by the field of view and the artificial magnetic field in the direction of the artificial magnetic pole (sink). The proposed approach is simple to implement at the individual node, involving minimal information exchange. In addition, it displays global self-* properties and emergent behavior, achieved collectively without explicitly programming these properties into individual packets. Performance evaluations show the effectiveness of the proposed Flock-based Congestion Control (Flock-CC) mechanism in dynamically balancing the offered load by effectively exploiting available network resources and moving packets to the sink. Furthermore, Flock-CC provides graceful performance degradation in terms of packet delivery ratio, packet loss, delay and energy tax under low, high and extreme traffic loads. Finally, the proposed approach achieves robustness against failing nodes and scalability across different network sizes, and outperforms typical conventional approaches.
@article{Antoniou.etal_2013_FlockRouting,
     author = {Pavlos Antoniou and Andreas Pitsillides and Tim Blackwell and Andries Engelbrecht and Loizos Michael},
     title = {{Congestion Control in Wireless Sensor Networks based on Bird Flocking Behaviour}},
     journal = {Computer Networks},
     publisher = {Elsevier},
     volume = {57},
     number = {5},
     pages = {1167--1191},
     month = {April},
     year = {2013},
}
''Primum Non Nocere'' for Personalized Education
L. Michael
(NIPS 2012) Workshop on Personalizing Education With Machine Learning (PEwML 2012), 2012.
In offering personalized education, one should be wary of the effects that an educational intervention may have on a student. Much like in medical ethics, we posit that the principle of non-maleficence should be a cornerstone in the design of educational systems. We provide a high-level and rather non-technical overview of ongoing work that proposes a learning-theoretic framework for investigating and addressing this issue.
@conference{Michael_2012_PrimumNonNocere,
     author = {Loizos Michael},
     title = {{``Primum Non Nocere'' for Personalized Education}},
     booktitle = {Working notes of the (NIPS 2012) Workshop on Personalizing Education With Machine Learning (PEwML 2012)},
     month = {December},
     year = {2012},
}
A preprint version of this article is available locally for download.
Evolvability via the Fourier Transform
L. Michael
Theoretical Computer Science, vol. 462, pg. 88–98, 2012.
[ Originally presented at China Theory Week 2007; invited participation and presentation. ]
The evolvability framework is a computational theory proposed by Valiant as a quantitative tool for the study of evolution. We explore in this work a natural generalization of Valiant's framework: an organism's genome is regarded as representing the Fourier spectrum of a real-valued function that the organism computes. A performance function is suggested that averages in a certain way the organism's responses over the distribution of its experiences. We show that this generalization supports the existence of an efficient, conceptually simple and direct evolutionary mechanism. More concretely, we consider the case where the ideal behavior that an organism strives to approximate is encoded by some decision list, and establish the evolvability of decision lists with respect to the suggested performance metric, over the uniform probability distribution.
In accordance with biological evidence on how genomes mutate, the evolutionary mechanism we propose performs only simple operations on the organism's genome to obtain mutated genomes. The surviving genome is selected greedily among genomes in the current generation based only on performance. A sustained performance improvement is ensured, at a fixed and predictable rate across generations, and a highly fit genome is evolved in a number of generations independent of the size of the ideal function, and determined only by the required approximation degree. Furthermore, the size of the genome grows logarithmically in the number of environmental attributes. None of these rather stringent, and presumably biologically desirable properties are enforced by the baseline evolvability framework, nor are these properties possessed by other early evolvability results.
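The Fourier-spectrum view of the genome above builds on standard Boolean Fourier analysis. As background, the following Python sketch (illustrative only; it is not the paper's evolutionary mechanism, and the function name is hypothetical) estimates a Fourier coefficient of a function over {-1, +1}^n under the uniform distribution:

```python
from itertools import product

# Standard Boolean Fourier analysis (background, not the paper's mechanism):
# the coefficient of f at a subset S of coordinates is the average of
# f(x) * chi_S(x) over the uniform distribution on {-1, +1}^n, where
# chi_S(x) is the product of the coordinates of x indexed by S.
def fourier_coefficient(f, S, n):
    total = 0.0
    for x in product((-1, 1), repeat=n):
        chi = 1
        for i in S:
            chi *= x[i]
        total += f(x) * chi
    return total / 2 ** n

# Example: f(x) = x0 * x1 has a single non-zero coefficient, at S = {0, 1}.
f = lambda x: x[0] * x[1]
print(fourier_coefficient(f, (0, 1), 3))  # 1.0
print(fourier_coefficient(f, (0,), 3))    # 0.0
```

A genome in this generalized framework can then be regarded as a (partial) table of such coefficients, mutated by simple operations on individual entries.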
@article{Michael_2012_EvolvabilityFourierTransform,
     author = {Loizos Michael},
     title = {{Evolvability via the Fourier Transform}},
     journal = {Theoretical Computer Science},
     publisher = {Elsevier},
     volume = {462},
     pages = {88--98},
     month = {November},
     year = {2012},
}
A preprint version of this article is available locally for download.
An Ant-Based Computer Simulator
L. Michael and A. Yiannakides
13th International Conference on the Synthesis and Simulation of Living Systems (ALife 2012), 2012.
Collaboration in nature is often illustrated through the collective behavior of ants. Although entomological studies have shown that certain goals are well-served by this collective behavior, the extent of what such a collaboration can achieve is not immediately clear. We extend past work that has argued that ants are, in principle, able to collectively compute logical circuits, and illustrate through a simulation that indeed such computations can be carried out meaningfully and robustly in situations that involve complex interactions between the ants.
@conference{Michael.etal_2012_AntComputerSimulator,
     author = {Loizos Michael and Anastasios Yiannakides},
     title = {{An Ant-Based Computer Simulator}},
     booktitle = {Proceedings of the 13th International Conference on the Synthesis and Simulation of Living Systems (ALife 2012)},
     publisher = {MIT Press},
     pages = {210--217},
     month = {July},
     year = {2012},
}
A preprint version of this article is available locally for download.
Missing Information Impediments to Learnability
L. Michael
24th Annual Conference on Learning Theory (COLT 2011), 2011.
[ Revised papers from the conference. ]
To what extent is learnability impeded when information is missing in learning instances? We present relevant known results and concrete open problems, in the context of a natural extension of the PAC learning model that accounts for arbitrarily missing information.
@conference{Michael_2011_MissingInfoImpedimentsToPAC,
     author = {Loizos Michael},
     title = {{Missing Information Impediments to Learnability}},
     booktitle = {Proceedings of the 24th Annual Conference on Learning Theory (COLT 2011)},
     volume = {19},
     pages = {825--827},
     month = {July},
     year = {2011},
}
A preprint version of this article is available locally for download.
Causal Learnability
L. Michael
22nd International Joint Conference on Artificial Intelligence (IJCAI 2011), 2011.
The ability to predict, or at least recognize, the state of the world that an action brings about, is a central feature of autonomous agents. We propose, herein, a formal framework within which we investigate whether this ability can be autonomously learned.
The framework makes explicit certain premises that we contend are central in such a learning task: (i) slow sensors may prevent the sensing of an action's direct effects during learning; (ii) predictions need to be made reliably in future and novel situations.
We initiate in this work a thorough investigation of the conditions under which learning is or is not feasible. Despite the very strong negative learnability results that we obtain, we also identify interesting special cases where learning is feasible and useful.
@conference{Michael_2011_CausalLearnability,
     author = {Loizos Michael},
     title = {{Causal Learnability}},
     booktitle = {Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI 2011)},
     publisher = {AAAI Press},
     pages = {1014--1020},
     month = {July},
     year = {2011},
}
A preprint version of this article is available locally for download.
A Unified Argumentation-Based Framework for Knowledge Qualification
L. Michael and A. Kakas
10th International Symposium on Logical Formalizations of Commonsense Reasoning (Commonsense 2011), 2011.
Among the issues faced by an intelligent agent, central is that of reconciling the, often contradictory, pieces of knowledge — be those given, learned, or sensed — at its disposal. This problem, known as knowledge qualification, requires that pieces of knowledge deemed reliable in some context be given preference over others.
These preferences are typically viewed as encodings of reasoning patterns; thus, the frame axiom can be encoded as a preference for persistence over spontaneous change. Qualification, then, results from the principled application of these preferences. We illustrate how this can be naturally done through argumentation, by uniformly treating object-level knowledge and reasoning patterns alike as arguments that can be defeated by other, stronger ones. We formulate an argumentation framework for Reasoning about Actions and Change that gives a semantics for Action Theories that include a State Default Theory.
Due to their explicit encoding as preferences, reasoning patterns can be adapted, when and if needed, by a domain designer to suit a specific application domain. Furthermore, the reasoning patterns can be defeated in light of stronger external evidence, allowing, for instance, the frame axiom to be overridden when unexpected sensory information suggests that spontaneous change may have broken persistence in a particular situation.
@conference{Michael.etal_2011_KnowledgeQualification,
     author = {Loizos Michael and Antonis Kakas},
     title = {{A Unified Argumentation-Based Framework for Knowledge Qualification}},
     booktitle = {Working notes of the 10th International Symposium on Logical Formalizations of Commonsense Reasoning (Commonsense 2011)},
     pages = {83--89},
     month = {March},
     year = {2011},
}
A preprint version of this article is available locally for download.
Modular-E and the Role of Elaboration Tolerance in Solving the Qualification Problem
A. C. Kakas, L. Michael, and R. Miller
Artificial Intelligence, vol. 175 (1), pg. 49–78, 2011.
[ Invited to Special Issue 'Logical Approaches to Artificial Intelligence: Papers in Honor of John McCarthy'. ]
We describe Modular-E (ME), a specialized, model-theoretic logic for reasoning about actions. ME is able to represent non-deterministic domains involving concurrency, static laws (constraints), indirect effects (ramifications), and narrative information in the form of action occurrences and observations along a time line. We give formal results which characterize ME's high degree of modularity and elaboration tolerance, and show how these properties help to separate out, and provide principled solutions to, different aspects of the qualification problem. In particular, we identify the endogenous qualification problem as the problem of properly accounting for highly distributed, and potentially conflicting, causal knowledge when reasoning about the effects of actions. We show how a comprehensive solution to the endogenous qualification problem helps simplify the exogenous qualification problem — the problem of reconciling conflicts between predictions about what should be true at particular times and actual observations. More precisely, we describe how ME is able to use straightforward default reasoning techniques to solve the exogenous qualification problem largely because its robust treatments of the frame, ramification and endogenous qualification problems combine into a particular characteristic of elaboration tolerance that we formally encapsulate as a notion of ''free will''.
@article{Kakas.etal_2011_ModularEQualification,
     author = {Antonis C. Kakas and Loizos Michael and Rob Miller},
     title = {{Modular-E and the Role of Elaboration Tolerance in Solving the Qualification Problem}},
     journal = {Artificial Intelligence},
     publisher = {Elsevier},
     volume = {175},
     number = {1},
     pages = {49--78},
     month = {January},
     year = {2011},
}
A preprint version of this article is available locally for download.
Partial Observability and Learnability
L. Michael
Artificial Intelligence, vol. 174 (11), pg. 639–669, 2010.
When sensing its environment, an agent often receives information that only partially describes the current state of affairs. The agent then attempts to predict what it has not sensed, by using other pieces of information available through its sensors. Machine learning techniques can naturally aid this task, by providing the agent with the rules to be used for making these predictions. For this to happen, however, learning algorithms need to be developed that can deal with missing information in the learning examples in a principled manner, and without the need for external supervision. We investigate this problem herein. We show how the Probably Approximately Correct semantics can be extended to deal with missing information during both the learning and the evaluation phase. Learning examples are drawn from some underlying probability distribution, but parts of them are hidden before being passed to the learner. The goal is to learn rules that can be used to accurately recover information hidden in these learning examples.
We show that for this to be done, one should first dispense with the requirement that rules always make definite predictions; ''don't know'' is sometimes necessary. On the other hand, such abstentions should not be made freely, but only when sufficient information is not present for definite predictions to be made. Under this premise, we show that to accurately recover missing information, it suffices to learn rules that are highly consistent, i.e., rules that simply do not contradict the agent's sensory inputs. It is established that high consistency implies a somewhat discounted accuracy, and that this discount is, in some defined sense, unavoidable, and depends on how adversarially information is hidden in the learning examples.
Within our proposed learning model we prove that any class of monotone or read-once formulas that is PAC learnable (from complete learning examples) is also learnable (from incomplete learning examples). By contrast, we prove that parities and monotone-term 1-decision lists, which are properly PAC learnable, are not properly learnable under the new learning model. In the process of establishing our positive and negative results, we re-derive some basic PAC learnability machinery, such as Occam's Razor, and reductions between learning tasks. We finally consider a special case of learning from partial learning examples, where some prior bias exists on the manner in which information is hidden, and show how this provides a unified view of many previous learning models that deal with missing information.
We suggest that the proposed learning model goes beyond a simple extension of supervised learning to the case of incomplete learning examples. The principled and general treatment of missing information during learning, we argue, allows an agent to employ learning entirely autonomously, without relying on the presence of an external teacher, as is the case in supervised learning. We call our learning model autodidactic to emphasize the explicit disassociation of this model from any form of external supervision.
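As a rough illustration of the abstention and consistency notions above, the following Python sketch (a toy, not the paper's learning model; the target rule and the masking process are assumptions made for illustration) shows a rule that abstains exactly when its inputs are hidden and, as a result, never contradicts the underlying reality:

```python
import random

# Toy sketch: examples have three Boolean attributes, with the hypothetical
# target x2 = x0 AND x1. Attributes may be masked (None) before the rule
# sees them, as in partial observability.
def rule(example):
    x0, x1 = example[0], example[1]
    if x0 is None or x1 is None:
        return None          # abstain: insufficient information
    return x0 and x1

def mask(full, p, rng):
    # Hide each attribute independently with probability p.
    return [v if rng.random() > p else None for v in full]

rng = random.Random(0)
for _ in range(100):
    x0, x1 = rng.choice([True, False]), rng.choice([True, False])
    full = [x0, x1, x0 and x1]
    pred = rule(mask(full, 0.3, rng))
    # Consistency: whenever the rule does predict, it agrees with reality.
    assert pred is None or pred == full[2]
```

The point of the sketch is only the shape of the guarantee: definite predictions when enough is sensed, abstention otherwise, and no contradiction with the sensory inputs in either case.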
@article{Michael_2010_PartiallyObservablePAC,
     author = {Loizos Michael},
     title = {{Partial Observability and Learnability}},
     journal = {Artificial Intelligence},
     publisher = {Elsevier},
     volume = {174},
     number = {11},
     pages = {639--669},
     month = {July},
     year = {2010},
}
A preprint version of this article is available locally for download.
Specifying and Monitoring Economic Environments using Rights and Obligations
L. Michael, D. C. Parkes, and A. Pfeffer
Autonomous Agents and Multi-Agent Systems, vol. 20 (2), pg. 158–197, 2010.
We provide a formal scripting language to capture the semantics of economic environments. The language is based on a set of well-defined design principles and makes explicit an agent's rights, as derived from property, and an agent's obligations, as derived from restrictions placed on its actions either voluntarily or as a consequence of other actions. Coupled with the language is a run-time system that is able to monitor and enforce rights and obligations in an agent-mediated economic environment. The framework allows an agent to formally express guarantees (obligations) in relation to its actions, and the run-time system automatically checks that these obligations are met and verifies that an agent has appropriate rights before executing an action.
Rights and obligations are viewed as first-class goods that can be transferred from one agent to another. This treatment makes it easy to define natural and expressive recursive statements, so that, for instance, one may have rights or obligations in selling or trading some other right or obligation. We define fundamental axioms about well-functioning markets in terms of rights and obligations, and delineate the difference between ownership and possession, arguably two of the most important notions in economic markets. The framework provides a rich set of action-related constructs for modeling conditional and non-deterministic effects, and introduces the use of transactions to safely bundle actions, including the issuing of rights and taking on of obligations. By way of example, we show that our language can represent a variety of economic mechanisms, ranging from simple two-agent single-good exchanges to complicated combinatorial auctions. The framework, which is fully implemented, can be used to formalize the semantics of markets; as a platform for prototyping, testing and evaluating agent-mediated markets; and also provide a basis for deploying an electronic market.
@article{Michael.etal_2010_RightsAndObligations,
     author = {Loizos Michael and David C. Parkes and Avi Pfeffer},
     title = {{Specifying and Monitoring Economic Environments using Rights and Obligations}},
     journal = {Autonomous Agents and Multi-Agent Systems},
     publisher = {Springer Netherlands},
     volume = {20},
     number = {2},
     pages = {158--197},
     month = {March},
     year = {2010},
}
A preprint version of this article is available locally for download.
Knowledge Qualification through Argumentation
L. Michael and A. C. Kakas
10th International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR 2009), 2009.
We propose a framework that brings together two major forms of default reasoning in Artificial Intelligence: default property classification in static domains, and default property persistence in temporal domains. Emphasis in this work is placed on the qualification problem, central when dealing with default reasoning, and in any attempt to integrate different forms of such reasoning.
Our framework can be viewed as offering a semantics to two natural problems: (i) that of employing default static knowledge in a temporal setting, and (ii) the dual one of temporally projecting and dynamically updating default static knowledge. The proposed integration is introduced through a series of example domains, and is then formalized through argumentation. The semantics follows a pragmatic approach. At each time-point, an agent predicts the next state of affairs. As long as this is consistent with the available observations, the agent continues to reason forward. In case some of the observations cannot be explained without appealing to some exogenous reason, the agent revisits and revises its past assumptions.
We conclude with some formal results, including an algorithm for computing complete admissible argument sets, and a proof of elaboration tolerance, in the sense that additional knowledge can be gracefully accommodated in any domain.
@conference{Michael.etal_2009_KnowledgeQualification,
     author = {Loizos Michael and Antonis C. Kakas},
     title = {{Knowledge Qualification through Argumentation}},
     booktitle = {Proceedings of the 10th International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR 2009)},
     publisher = {Springer},
     volume = {5753},
     pages = {209--222},
     month = {September},
     year = {2009},
}
A preprint version of this article is available locally for download.
Ceteris Paribus Preference Elicitation with Predictive Guarantees
Y. Dimopoulos, L. Michael, and F. Athienitou
21st International Joint Conference on Artificial Intelligence (IJCAI 2009), 2009.
CP-networks have been proposed as a simple and intuitive graphical tool for representing conditional ceteris paribus preference statements over the values of a set of variables. While the problem of reasoning with CP-networks has been receiving some attention, there are very few works that address the problem of learning CP-networks.
In this work we investigate the task of learning CP-networks, given access to a set of pairwise comparisons. We first prove that the learning problem is intractable, even under several simplifying assumptions. We then present an algorithm that, under certain assumptions about the observed pairwise comparisons, identifies a CP-network that entails these comparisons. We finally show that the proposed algorithm is a PAC-learner, and, thus, that the CP-networks it induces accurately predict the user's preferences on previously unseen situations.
@conference{Dimopoulos.etal_2009_CPNetLearning,
     author = {Yannis Dimopoulos and Loizos Michael and Fani Athienitou},
     title = {{Ceteris Paribus Preference Elicitation with Predictive Guarantees}},
     booktitle = {Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI 2009)},
     pages = {1890--1895},
     month = {July},
     year = {2009},
}
A preprint version of this article is available locally for download.
Reading Between the Lines
L. Michael
21st International Joint Conference on Artificial Intelligence (IJCAI 2009), 2009.
Reading involves, among other things, identifying what is implied but not expressed in text. This task, known as textual entailment, offers a natural abstraction for many NLP tasks, and has been recognized as a central tool for the new area of Machine Reading. Important in the study of textual entailment is making precise the sense in which something is implied by text. The operational definition often employed is a subjective one: something is implied if humans are more likely to believe it given the truth of the text than otherwise. In this work we propose a natural objective definition for textual entailment.
Our approach is to view text as a partial depiction of some underlying hidden reality. Reality is mapped into text through a possibly stochastic process, the author of the text. Textual entailment is then formalized as the task of accurately, in a defined sense, recovering information about this hidden reality.
We show how existing machine learning work can be applied to this information recovery setting, and discuss the implications for the construction of machines that autonomously engage in textual entailment. We then investigate the role of using multiple inference rules for this task. We establish that such rules cannot be learned and applied in parallel, but that layered learning and reasoning are necessary.
@conference{Michael_2009_ReadingBetweenTheLines,
     author = {Loizos Michael},
     title = {{Reading Between the Lines}},
     booktitle = {Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI 2009)},
     pages = {1525--1530},
     month = {July},
     year = {2009},
}
A preprint version of this article is available locally for download.
A Substitution Theorem for Graceful Trees and its Applications
M. Mavronicolas and L. Michael
Discrete Mathematics, vol. 309 (12), pg. 3757–3766, 2009.
A graceful labeling of a graph G=(V,E) assigns |V| distinct integers from the set {0,...,|E|} to the vertices of G, so that the absolute values of their differences on the |E| edges of G constitute the set {1,...,|E|}. A graph is graceful if it admits a graceful labeling. The forty-year-old Graceful Tree Conjecture, due to Ringel and Kotzig, states that every tree is graceful.
We prove a Substitution Theorem for graceful trees, which enables the construction of a larger graceful tree through combining smaller and not necessarily identical graceful trees. We present applications of the Substitution Theorem, which generalize earlier constructions combining smaller trees.
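The definition of gracefulness is easy to check mechanically. The following Python sketch (illustrative only; `is_graceful` is a hypothetical helper, not part of the paper) verifies a candidate graceful labeling of a small tree:

```python
def is_graceful(labels, edges):
    """Check whether labels (vertex -> int) is a graceful labeling
    of the graph given by the edge list."""
    n_edges = len(edges)
    values = list(labels.values())
    # Labels must be distinct integers drawn from {0, ..., |E|}.
    if len(set(values)) != len(values):
        return False
    if not all(0 <= v <= n_edges for v in values):
        return False
    # Induced edge differences must be exactly {1, ..., |E|}.
    diffs = {abs(labels[u] - labels[v]) for u, v in edges}
    return diffs == set(range(1, n_edges + 1))

# A path on 4 vertices (3 edges), labeled 0-3, 3-1, 1-2:
edges = [("a", "b"), ("b", "c"), ("c", "d")]
labels = {"a": 0, "b": 3, "c": 1, "d": 2}
print(is_graceful(labels, edges))  # True
```

Here the edge differences are |0-3| = 3, |3-1| = 2 and |1-2| = 1, which is exactly the set {1, 2, 3}, so the path is graceful.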
@article{Mavronicolas.etal_2009_GracefulTreeSubstitution,
     author = {Marios Mavronicolas and Loizos Michael},
     title = {{A Substitution Theorem for Graceful Trees and its Applications}},
     journal = {Discrete Mathematics},
     publisher = {Elsevier},
     volume = {309},
     number = {12},
     pages = {3757--3766},
     month = {June},
     year = {2009},
}
A preprint version of this article is available locally for download.
Ant-Based Computing
L. Michael
Artificial Life, vol. 15 (3), pg. 337–349, 2009.
[ Conference version in ECAL 2005; nominated for best paper award / received special commendation. ]
A biologically and physically plausible model for ants and pheromones is proposed. It is argued that the mechanisms described in this model are sufficiently powerful to reproduce the necessary components of universal computation. The claim is supported by illustrating the feasibility of designing arbitrary logic circuits, showing that the interactions of ants and pheromones lead to the expected behavior, and presenting computer simulation results to verify that the circuits work as intended. The conclusions of this study can be taken as evidence that coherent deterministic and centralized computation can emerge from the collective behavior of simple distributed Markovian processes such as those followed by biological ants, but also, more generally, by artificial agents with limited computational and communication abilities.
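As a loose intuition for how collective trail-laying can realize logic gates, the following toy Python sketch (purely illustrative; it is not the paper's ant/pheromone model) treats a gate as a pheromone threshold over its input trails:

```python
# Toy intuition (not the paper's model): a gate "fires" when the total
# pheromone concentration on its input trails reaches a threshold,
# as might result from enough ants reinforcing those trails.
def gate(threshold):
    def fire(*pheromone_levels):
        return sum(pheromone_levels) >= threshold
    return fire

AND = gate(2)   # fires only when both input trails carry pheromone
OR = gate(1)    # fires when at least one input trail carries pheromone

assert AND(1, 1) and not AND(1, 0)
assert OR(1, 0) and not OR(0, 0)
```

Composing such threshold units is one informal way to see why arbitrary circuits, and hence universal computation, are within reach of collective behavior.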
@article{Michael_2009_AntBasedComputing,
     author = {Loizos Michael},
     title = {{Ant-Based Computing}},
     journal = {Artificial Life},
     publisher = {MIT Press},
     volume = {15},
     number = {3},
     pages = {337--349},
     month = {Summer},
     year = {2009},
}
A preprint version of this article is available locally for download.
Computing on a Partially Eponymous Ring
M. Mavronicolas, L. Michael, and P. G. Spirakis
Theoretical Computer Science, vol. 410 (6–7), pg. 595–613, 2009.
[ Invited to Special Issue 'Principles of Distributed Systems'. ]
We study the partially eponymous model of distributed computation, which simultaneously generalizes the anonymous and the eponymous models. In this model, processors have identities, which are neither necessarily all identical (as in the anonymous model) nor necessarily unique (as in the eponymous model). In a decision problem formalized as a relation, processors receive inputs and seek to reach outputs respecting the relation. We focus on the partially eponymous ring and the computation of circularly symmetric relations on it, considering sets of rings where all rings in the set have the same multiset of identity multiplicities.
- We distinguish between solvability and computability: in solvability, processors are required to always reach outputs respecting the relation; in computability, they must do so whenever this is possible, and must otherwise report impossibility.
- We present a topological characterization of solvability for a relation on a set of rings, which can be expressed as an efficiently checkable, number-theoretic predicate.
- We present a universal distributed algorithm for computing a relation on a set of rings; it runs any distributed algorithm for constructing views, followed by local steps.
- We derive, as our main result, a universal upper bound on the message complexity of computing a relation on a set of rings; this bound demonstrates a graceful degradation with the Least Minimum Base, a parameter indicating the degree of least possible eponymity for a set of rings. Thereafter, we identify two cases where a relation can be computed on a set of rings, with rings of size n, with an efficient number of O(n lg n) messages.
@article{Mavronicolas.etal_2009_PartiallyEponymousRing,
     author = {Marios Mavronicolas and Loizos Michael and Paul G. Spirakis},
     title = {{Computing on a Partially Eponymous Ring}},
     journal = {Theoretical Computer Science},
     publisher = {Elsevier},
     volume = {410},
     number = {6--7},
     pages = {595--613},
     month = {February},
     year = {2009},
}
A preprint version of this article is available locally for download.
A First Experimental Demonstration of Massive Knowledge Infusion
L. Michael and L. G. Valiant
11th International Conference on Principles of Knowledge Representation and Reasoning (KR 2008), 2008.
A central goal of Artificial Intelligence is to create systems that embody commonsense knowledge in a reliable enough form that it can be used for reasoning in novel situations. Knowledge Infusion is an approach to this problem in which the commonsense knowledge is acquired by learning. In this paper we report on experiments on a corpus of a half million sentences of natural language text that test whether commonsense knowledge can be usefully acquired through this approach.
We examine the task of predicting a deleted word from the remainder of a sentence for some 268 target words. As a baseline we consider how well this task can be performed using learned rules based on the words within a fixed distance of the target word and their parts of speech. This captures an approach that has been previously demonstrated to be highly successful for a variety of natural language tasks. We then go on to learn from the corpus rules that embody commonsense knowledge, additional to the knowledge used in the baseline case. We show that chaining learned commonsense rules together leads to measurable improvements in prediction performance on our task as compared with the baseline.
This is apparently the first experimental demonstration that commonsense knowledge can be learned from natural inputs on a massive scale reliably enough that chaining the learned rules is efficacious for reasoning.
@conference{Michael.etal_2008_ExperimentalKI,
     author = {Loizos Michael and Leslie G. Valiant},
     title = {{A First Experimental Demonstration of Massive Knowledge Infusion}},
     booktitle = {Proceedings of the 11th International Conference on Principles of Knowledge Representation and Reasoning (KR 2008)},
     publisher = {AAAI Press},
     pages = {378--389},
     month = {September},
     year = {2008},
}
A preprint version of this article is available locally for download.
Autodidactic Learning and Reasoning
L. Michael
Harvard University, 2008.
[ Advisor: Leslie Valiant ]
The formal study of intelligence has largely focused on learning and reasoning, the processes by which knowledge is, respectively, acquired and applied. This dissertation investigates how the two processes may be undertaken together in an autodidactic, or self-taught, manner. The thesis put forward is that the development of such a unified framework rests on the principled understanding of a third process, that of sensing.
Sensing is formalized in this dissertation as the process by which some underlying reality completely specifying a state of affairs is mapped to an appearance explicitly offering only partial information. Learning is employed to discover the structure of the reality, and reasoning is employed to recover as much of the missing information as possible. Emphasis is placed on the tractability of learning and reasoning, and on the existence of formal guarantees on the accuracy of the information recovered, making only minimal assumptions on the nature of information loss during the sensing phase.
An investigation of the conditions under which the task of information recovery is feasible is undertaken. It is shown that it suffices, and is optimal in some precisely defined sense, to induce rules that are simply consistent with the observed appearances. For environments with structure expressible via monotone rules, learning consistently from partial appearances reduces to learning from complete appearances, allowing for known positive results to be lifted to the case of autodidactic learning. On the negative side, there exist environments where partial appearances compromise learnability.
The contribution of chaining rules — induced or externally provided ones — for information recovery is then examined, and is shown to be that of increasing the combined predictive soundness and completeness. This result provides apparently the first formal separation between multi-layered and single-layered reasoning in this context.
It is further established that the learning and reasoning processes cannot be completely decoupled in the autodidactic setting. Instead, an approach that interleaves the two processes is introduced, which proceeds by learning the rules to be employed for multi-layered reasoning in an iterative manner, one layer at a time. This approach of employing interim reasoning, or reasoning while learning, is shown to suffice and to be a universal approach for the induction of knowledge that is to be reasoned with.
The design and implementation of a system for automatically acquiring and manipulating knowledge is finally considered. Semantic information extracted from a natural language text corpus is interpreted, following the theory, as partial information about the real world. It is argued that rules induced from such information capture some commonsense knowledge. This knowledge is subsequently employed to recover information that is not explicitly stated in the corpus. Experiments were performed on a massive scale, and serious computational challenges had to be addressed to ensure scalability. The experimental setting was designed with the novel goal of detecting whether commonsense knowledge has been extracted. The experimental results presented suggest that this goal has been achieved to a measurable degree.
@phdthesis{Michael_2008_AutodidacticLearningAndReasoning,
     author = {Loizos Michael},
     title = {{Autodidactic Learning and Reasoning}},
     school = {Harvard University},
     month = {May},
     year = {2008},
}
The Price of Defense
M. Mavronicolas, L. Michael, V. G. Papadopoulou, A. Philippou, and P. G. Spirakis
31st International Symposium on Mathematical Foundations of Computer Science (MFCS 2006), 2006.
We consider a strategic game with two classes of confronting randomized players on a graph G(V, E): ν attackers, each choosing vertices and wishing to minimize the probability of being caught, and a defender, who chooses edges and gains the expected number of attackers it catches. The Price of Defense is the worst-case ratio, over all Nash equilibria, of the optimal gain of the defender over its gain at a Nash equilibrium. We provide a comprehensive collection of trade-offs between the Price of Defense and the computational efficiency of Nash equilibria.
– Through reduction to a Two-Players, Constant-Sum Game, we prove that a Nash equilibrium can be computed in polynomial time. The reduction does not provide any apparent guarantees on the Price of Defense.

– To obtain such guarantees, we analyze several structured Nash equilibria:

– In a Matching Nash equilibrium, the support of the defender is an Edge Cover. We prove that they can be computed in polynomial time, and they incur a Price of Defense of α(G), the Independence Number of G.
– In a Perfect Matching Nash equilibrium, the support of the defender is a Perfect Matching. We prove that they can be computed in polynomial time, and they incur a Price of Defense of |V|/2.
– In a Defender Uniform Nash equilibrium, the defender chooses each edge in its support uniformly. We prove that they incur a Price of Defense falling between those for Matching and Perfect Matching Nash equilibria; however, it is NP-complete to decide their existence.
– In an Attacker Symmetric and Uniform Nash equilibrium, all attackers have a common support on which each uses a uniform distribution. We prove that they can be computed in polynomial time and incur a Price of Defense of either |V|/2 or α(G).
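The two graph quantities appearing in the bounds above, the Independence Number α(G) and |V|/2, can be computed by brute force on a small example. The enumeration below is illustrative only; the paper's results concern arbitrary graphs and polynomial-time algorithms.

```python
# Brute-force computation of alpha(G), the Independence Number, which
# bounds the Price of Defense of Matching Nash equilibria.
from itertools import combinations

def independence_number(vertices, edges):
    """Size of the largest vertex set with no edge between any two members."""
    for size in range(len(vertices), 0, -1):
        for subset in combinations(vertices, size):
            s = set(subset)
            if all(not (u in s and v in s) for u, v in edges):
                return size
    return 0

# A 4-cycle: alpha(C4) = 2 and |V|/2 = 2, so the two bounds coincide here.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(independence_number(V, E), len(V) / 2)  # 2 2.0
```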

@conference{Mavronicolas.etal_2006_PriceOfDefense,
     author = {Marios Mavronicolas and Loizos Michael and Vicky G. Papadopoulou and Anna Philippou and Paul G. Spirakis},
     title = {{The Price of Defense}},
     booktitle = {Proceedings of the 31st International Symposium on Mathematical Foundations of Computer Science (MFCS 2006)},
     publisher = {Springer},
     volume = {4162},
     pages = {717--728},
     month = {August--September},
     year = {2006},
}
A preprint version of this article is available locally for download.
TBBL: A Tree-Based Bidding Language for Iterative Combinatorial Exchanges
R. Cavallo, D. C. Parkes, A. I. Juda, A. Kirsch, A. Kulesza, S. Lahaie, B. Lubin, L. Michael, and J. Shneidman
(IJCAI 2005) 1st Multidisciplinary Workshop on Advances in Preference Handling (M-Pref 2005), 2005.
We present a novel tree-based logical bidding language, TBBL, for preference elicitation in combinatorial exchanges (CEs). TBBL provides new expressiveness for two-sided markets with agents that are both buying and selling goods. Moreover, the rich semantics of TBBL allow the language to capture new structure, making it exponentially more concise than OR* and LGB for preferences that are realistic in important domains for CEs. With simple extensions TBBL can subsume these earlier languages. TBBL can also explicitly represent partial information about valuations. The language is designed such that the structure in TBBL bids can be concisely captured directly in mixed-integer programs for the allocation problem. We illustrate TBBL through examples drawn from domains to which it can be (and is being) applied, and motivate further extensions we are currently pursuing.
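A tree-structured bid of the kind described above can be sketched in miniature. The node semantics below (an "interval-choose" node whose value counts when between lo and hi children are satisfied, generalizing AND, OR, and XOR) and the example bid are simplifications for illustration, not the full TBBL specification.

```python
# Evaluating a bundle of goods against a tree-structured bid: leaves name
# goods, internal nodes combine children under an interval constraint.

def value(node, bundle):
    """Return the value a bundle of goods yields under a bid tree."""
    if "good" in node:                       # leaf: a single good
        return node["value"] if node["good"] in bundle else 0
    satisfied = sum(1 for c in node["children"] if value(c, bundle) > 0)
    if node["lo"] <= satisfied <= node["hi"]:
        return node["value"] + sum(value(c, bundle) for c in node["children"])
    return 0

# "Value 10 for goods A and B together" (an AND node: lo = hi = 2).
bid = {"lo": 2, "hi": 2, "value": 10,
       "children": [{"good": "A", "value": 1}, {"good": "B", "value": 1}]}

print(value(bid, {"A", "B"}))  # 12
print(value(bid, {"A"}))       # 0
```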
@conference{Cavallo.etal_2005_TBBLforICE,
     author = {Ruggiero Cavallo and David C. Parkes and Adam I. Juda and Adam Kirsch and Alex Kulesza and S\'ebastien Lahaie and Benjamin Lubin and Loizos Michael and Jeffrey Shneidman},
     title = {{TBBL: A Tree-Based Bidding Language for Iterative Combinatorial Exchanges}},
     booktitle = {Working notes of the (IJCAI 2005) 1st Multidisciplinary Workshop on Advances in Preference Handling (M-Pref 2005)},
     month = {July--August},
     year = {2005},
}
A preprint version of this article is available locally for download.
ICE: An Iterative Combinatorial Exchange
D. C. Parkes, R. Cavallo, N. Elprin, A. I. Juda, S. Lahaie, B. Lubin, L. Michael, J. Shneidman, and H. Sultan
6th ACM Conference on Electronic Commerce (EC 2005), 2005.
We present the first design for a fully expressive iterative combinatorial exchange (ICE). The exchange incorporates a tree-based bidding language that is concise and expressive for CEs. Bidders specify lower and upper bounds on their value for different trades. These bounds allow price discovery and useful preference elicitation in early rounds, and allow termination with an efficient trade despite partial information on bidder valuations. All computation in the exchange is carefully optimized to exploit the structure of the bid-trees and to avoid enumerating trades. A proxied interpretation of a revealed-preference activity rule ensures progress across rounds. A VCG-based payment scheme that has been shown to mitigate opportunities for bargaining and strategic behavior is used to determine final payments. The exchange is fully implemented and in a validation phase.
@conference{Parkes.etal_2005_ICE,
     author = {David C. Parkes and Ruggiero Cavallo and Nick Elprin and Adam I. Juda and S\'ebastien Lahaie and Benjamin Lubin and Loizos Michael and Jeffrey Shneidman and Hassan Sultan},
     title = {{ICE: An Iterative Combinatorial Exchange}},
     booktitle = {Proceedings of the 6th ACM Conference on Electronic Commerce (EC 2005)},
     pages = {249--258},
     month = {June},
     year = {2005},
}
A preprint version of this article is available locally for download.
Reasoning about Actions and Change in Answer Set Programming
Y. Dimopoulos, A. C. Kakas, and L. Michael
7th International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR 2004), 2004.
This paper studies computational issues related to the problem of reasoning about actions and change (RAC) by exploiting its link with the Answer Set Programming paradigm. It investigates how increasing the expressiveness of a RAC formalism so that it can capture the three major problems of frame, ramification and qualification, affects its computational complexity, and how a solution to these problems can be implemented within Answer Set Programming. Our study is carried out within the particular language E. It establishes a link between language E and Answer Set Programming by presenting encodings of different versions of this language into logic programs under the answer set semantics. This provides a computational realization of solutions to problems related to reasoning about actions and change that can make use of the recent development of effective systems for Answer Set Programming.
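The frame problem addressed above can be pictured with a toy simulation: action effects change only the named fluents, and every other fluent persists by inertia. The fluents and actions below are hypothetical; the paper works in language E with answer-set-programming encodings rather than this direct procedural rendition.

```python
# Inertia-based state transition: listed effects override, the rest persist.

def apply(state, effects):
    """Next state: effect axioms override, unaffected fluents are inert."""
    new = dict(state)        # inertia: copy all fluents forward
    new.update(effects)      # effects: override only the changed fluents
    return new

state = {"alive": True, "loaded": False}
state = apply(state, {"loaded": True})    # action: load
state = apply(state, {"alive": False})    # action: shoot (gun is loaded)
print(state)  # {'alive': False, 'loaded': True}
```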
@conference{Dimopoulos.etal_2004_RACinASP,
     author = {Yannis Dimopoulos and Antonis C. Kakas and Loizos Michael},
     title = {{Reasoning about Actions and Change in Answer Set Programming}},
     booktitle = {Proceedings of the 7th International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR 2004)},
     pages = {61--73},
     month = {January},
     year = {2004},
}
A preprint version of this article is available locally for download.


Last updated: 09 Jan. 2017 Powered by JabRef.