... formally.1.1
Sometimes the term "epistemic logic" is reserved for the logic of knowledge, and "doxastic logic" is used to denote the logic of belief. In the present thesis we shall use the term "epistemic logic" in the wider sense, covering both.

... attitudes.1.2
See [FG97] for a recent discussion of the agent concept. McCarthy ([McC79]) discusses the problem of ascribing human-like qualities to artificial entities.

... reasoning1.3
Some authors use the term "bounded rationality" to express the idea that an agent cannot compute everything he could compute if his resources were unlimited. Since that term is somewhat misleading, I shall use "resource boundedness" throughout the thesis.

... formulae.1.4
We avoid using the word "intensional" because epistemic concepts are not intensional in the sense of Carnap [Car47]. Those concepts are -- as Cresswell pointed out ([Cre80]) -- hyper-intensional.
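To make the distinction concrete: hyper-intensionality means that even logical equivalence need not preserve knowledge. The following schema (with K as a knowledge operator, a notation assumed here for illustration) can fail for a resource-bounded agent even though the two formulae are logically equivalent:

```latex
% \varphi and \varphi \wedge (\psi \vee \neg\psi) are logically
% equivalent, yet an agent need not know them both:
K\varphi \;\not\Rightarrow\; K\bigl(\varphi \wedge (\psi \vee \neg\psi)\bigr)
```

An intensional operator in Carnap's sense would be closed under such equivalences; hyper-intensional operators are not.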

...''2.1
The truth values of epistemic statements also depend on other parameters such as time, location, and context. However, it is common practice in epistemic logic to take only agents into consideration and to assume certain standard values for the other parameters, i.e., the sentences are interpreted relative to the "current" situation. If only one agent is considered, then even the reference to the agent is omitted.

... belief.2.2
It should be noted, however, that AI terminology does not draw the sharp distinction between knowledge and belief that philosophy does: knowledge is not required to be true. Unless stated otherwise, I shall follow this terminology and use the term "knowledge" in the wider sense.

... validity2.3
Some formulae containing the knowledge operators are always valid, but they are not genuine epistemic statements: such a formula does not say anything about an agent's reasoning capacities. Counterexamples to the commonly assumed epistemic closure principles are presented in [Len78] and [Hal95], among others.
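For concreteness, the most commonly assumed closure principle is the distribution schema, stated here with a single knowledge operator K (the multi-agent indexed version is analogous):

```latex
% Epistemic closure (distribution axiom K): knowledge is closed
% under known implication.  This is the schema challenged in
% [Len78] and [Hal95].
K(\varphi \rightarrow \psi) \;\rightarrow\; (K\varphi \rightarrow K\psi)
```

For an idealized agent this schema is harmless; for a resource-bounded agent it forces knowledge of all consequences the agent has not actually derived.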

... reasoning2.4
If the possible-worlds semantics is adapted for modeling motivational concepts such as goals, desires, or intentions, then the resulting logics suffer from a similar problem: if an agent intends to do something, then he intends all logical consequences of his intention. This is not a desirable property: one might intend to go to the dentist without having an intention of suffering pain, although the latter is a necessary consequence of the former. This problem is known as the side-effect problem (cf. [Bra90]).
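Written schematically (with I as an intention operator, a notation assumed here for illustration), the unwanted closure underlying the side-effect problem is:

```latex
% Closure of intention under logical consequence, as forced by a
% naive possible-worlds semantics:
\text{if } \models \varphi \rightarrow \psi
\text{ then } \models I\varphi \rightarrow I\psi
% With \varphi = "visit the dentist" and \psi = "suffer pain",
% the agent is compelled to intend the pain as well.
```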

... consequence3.1
Normal modal logics are systems which are closed under the (knowledge) necessitation rule (NEC), and monotonic modal logics are closed under the monotony rule (MON). In the context of modal logic, the term "monotonic" means that the rule (MON) holds; this usage should not be confused with the terminology of non-monotonic reasoning research. Consult appendix A for a brief overview of modal systems weaker than the minimal normal modal logic K.
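The two rules can be stated as follows (with K as the knowledge/necessity operator):

```latex
% Necessitation (NEC): every theorem is known.
\frac{\vdash \varphi}{\vdash K\varphi}\;(\mathrm{NEC})
\qquad
% Monotony (MON): knowledge is closed under provable implication.
\frac{\vdash \varphi \rightarrow \psi}
     {\vdash K\varphi \rightarrow K\psi}\;(\mathrm{MON})
```

Every normal modal logic validates both rules; the monotonic (non-normal) systems discussed in appendix A keep (MON) while dropping (NEC) and the distribution axiom.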

... knowledge3.2
Strictly speaking, Levesque's logic of "explicit" belief ([Lev84]) still describes a kind of implicit belief, because what is defined to be the explicit belief of an agent in that model is not immediately available to the agent. The same criticism applies to other models which intend to capture explicit belief but still fall prey to some form of logical omniscience, e.g., Konolige's deduction model ([Kon86]) or the notion of algorithmic knowledge of Halpern et al. ([HMV94]).

....4.1
Instead of the unary operator we could also introduce a binary operator with the interpretation "in a state where the agent knows the first argument, after the application of the rule he may come to know the second". However, the former notation is closer to that of dynamic logic, whereas the latter does not offer any obvious advantage.
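One possible rendering of the two alternatives (the symbols a, \varphi, \psi and the operator shapes are assumptions made here for illustration, not the thesis's own notation):

```latex
% Dynamic-logic style: after applying rule a, the agent may know \psi.
\langle a \rangle K\psi
% Binary-operator style: in a state where the agent knows \varphi,
% applying rule a may yield knowledge of \psi.
K_{a}(\varphi, \psi)
```

The first form composes directly with the program operators of dynamic logic, which is the advantage alluded to above.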

... respectively.4.2
In dynamic logic another form of iteration is considered, viz. the one that allows for running a program zero times, denoted by the star operator. But one can easily extend dynamic logic to include non-zero iteration as well.
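The relation between the two forms of iteration can be made explicit in standard dynamic-logic notation (a for a program, ; for sequential composition):

```latex
% Kleene iteration: zero or more executions of a.
a^{*}
% Non-zero iteration, definable from * and composition:
a^{+} \;=\; a\,;a^{*}
```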

... actions4.3
One may ask how seriously introspection can be taken as an action. It is true that introspection may differ from the "genuine" reasoning actions in some respects. However, the differences are not very significant: it seems reasonable to treat introspection as a test of a certain kind, used by the agents to reason about their own mental state.

... constraints5.1
Among the relevant resources, time is the most important one, so we shall focus on that factor and try to model time constraints. The other resources can be modeled in a similar manner, as I shall show later.

... units5.2
The statement that an agent needs a certain number of time units to compute a formula does not imply that he will actually know the formula once that much time has passed. If the agent is not asked to provide the information, then he has no reason to waste his resources in order to find a useless answer. The aspect of goal-directedness is implicit in our concept of knowledge.
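Schematically, with hypothetical notation assumed here for illustration (K_i^t \varphi for "agent i can compute \varphi within t time units", now for the current time), the point is:

```latex
% Being able to compute \varphi in t units does not entail actually
% knowing \varphi at time now + t; the derivation happens only if
% the agent is asked:
K_i^{t}\varphi \;\not\Rightarrow\; \mathrm{at}(now + t)\,K_i\varphi
```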

... knows5.3
An agent may in fact have some relevance criteria to narrow down the search space, so that he actually tries to derive the conclusion from the relevant part of his knowledge only. However, it is typically not possible to restrict attention to the formula in question, because the knowledge that can be derived from the intermediate results can usually be determined only after a proof has been constructed.

... class5.4
The computation speed of an agent depends on several factors, e.g., the number of inferences he can perform per time unit, the quality of his algorithms, his ability to classify problems and to select suitable algorithms for solving them, etc. Once a time frame has been fixed (i.e., when time units are defined), the speed can be determined empirically.