How should autonomous systems be analyzed? In particular, we describe how the confluence of developments in two areas—autonomous systems architectures and formal verification for rational agents—can provide the basis for the formal verification of autonomous system behaviors.
New research shows that AJPF (Agent Java PathFinder) can be used to formally verify a rational, autonomous agent written in GWENDOLEN (a derivative of AgentSpeak), guaranteeing that it always reasons correctly with respect to its available actions, goals, and beliefs.
At first glance the approach in this paper looks unscalable, because it achieves its guarantees through an exhaustive model checker. However, if an autonomous agent of this type is architected as a multi-agent system composed of small enough modules, the modules can be model checked separately, and the formal guarantees of correct reasoning for the whole system can then be composed using standard assume-guarantee or rely-guarantee approaches.
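The compositional idea can be illustrated with a toy sketch (the modules, names, and numeric ranges below are hypothetical, not from the paper or from AJPF itself): each small module is checked exhaustively against its own guarantee under an assumption about its inputs, and the composed system inherits both guarantees because one module's guarantee discharges the other's assumption.

```python
# Toy assume-guarantee sketch (illustrative only, not the authors' tooling):
# each module's small state space is checked exhaustively on its own.

def check(states, invariant):
    """Exhaustively check that an invariant holds in every state."""
    return all(invariant(s) for s in states)

# Module A: a sensor filter that, *assuming* raw readings lie in [0, 200],
# guarantees its clamped output lies in [0, 100].
raw_readings = range(0, 201)                       # assumption on A's inputs
a_states = [min(max(r, 0), 100) for r in raw_readings]
a_guarantee = lambda out: 0 <= out <= 100

# Module B: a controller that, *assuming* its input is in [0, 100]
# (exactly A's guarantee), guarantees it never issues a command above 10.
b_states = [(inp, inp // 10) for inp in range(0, 101)]
b_guarantee = lambda s: 0 <= s[1] <= 10

# Each module is model checked separately over its own small state space...
assert check(a_states, a_guarantee)
assert check(b_states, b_guarantee)
# ...and since B's assumption is discharged by A's guarantee, the composed
# system satisfies both properties without exploring the product state space.
print("composed guarantee holds")
```

The point of the sketch is the cost structure: each check walks only one module's states, so the work grows with the sum of the module state spaces rather than their product.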
The authors also note that their approach does not yet satisfy at least three fundamentally desirable criteria:
1) Having the agent take into account how accurate and reliable its beliefs about the environment are, and then using this uncertainty information to move from strict true/false beliefs to more complex belief distributions.
2) Given these more complex beliefs, modifying the guarantees to verify only “correct intentions” (i.e., that the agent carries out the best action available to it, weighted by these belief distributions).
3) The more subtle and difficult problem of proper goal acquisition, which remains an active area of research at the Machine Intelligence Research Institute.
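Criteria 1 and 2 can be made concrete with a toy sketch (every name, probability, and utility below is invented for illustration; nothing here comes from the paper): a boolean belief is replaced by a probability reflecting sensor reliability, and a "correct intention" becomes the action with the highest expected utility under that distribution.

```python
# Toy sketch of criteria 1 and 2 (all names and numbers are illustrative).

# Instead of a strict true/false belief "obstacle_ahead", hold a probability
# that reflects how accurate and reliable the sensor is.
beliefs = {"obstacle_ahead": 0.7}  # 70% confident, given sensor reliability

# Utility of each available action in each possible world.
utilities = {
    "swerve":   {"obstacle_ahead": 10,   "clear": -1},
    "continue": {"obstacle_ahead": -100, "clear": 5},
}

def expected_utility(action):
    """Weight each outcome by the agent's belief distribution."""
    p = beliefs["obstacle_ahead"]
    return p * utilities[action]["obstacle_ahead"] + (1 - p) * utilities[action]["clear"]

# The "correct intention" is the action maximizing expected utility:
# swerve = 0.7*10 + 0.3*(-1) = 6.7; continue = 0.7*(-100) + 0.3*5 = -68.5.
best = max(utilities, key=expected_utility)
print(best)  # swerve
```

Verifying such an agent would then mean checking that it always forms the intention with the best expected outcome, rather than checking that each individual belief is true.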
See on commonsenseatheism.com