In Jones and Goel (2012), we describe a meta-reasoning architecture that uses abstraction networks (ANs) and empirical verification procedures (EVPs) to ground self-diagnosis and self-repair of domain knowledge in perception. In particular, we showed that when a hierarchical classifier organized as an AN makes an incorrect prediction, meta-reasoning can help diagnose and repair the semantics of the concepts in the network. Further, we demonstrated that if an EVP associated with each concept in the network can verify the semantics of that concept at diagnosis time, then the meta-reasoner can perform knowledge diagnosis and repair tractably. In this article, we report three additional results on the use of perceptually grounded meta-reasoning for correcting prediction errors. First, a new theoretical analysis indicates that the meta-reasoning diagnostic procedure is optimal and establishes the knowledge conditions under which learning converges. Second, an empirical study indicates that the EVPs themselves can be adapted by refining the conceptual semantics. Third, another empirical study shows that if EVPs cannot be defined for all concepts in a hierarchy, the computational technique degrades gracefully. While the theoretical analysis provides a deeper explanation of the sources of power in ANs, the two empirical studies demonstrate ways in which the strong assumptions made by ANs in their most basic form can be relaxed.
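To make the diagnostic idea concrete, the following is a minimal, hypothetical sketch of an AN-style diagnosis in Python. The class and function names, the feature-dictionary percept, and the threshold-style EVPs are all illustrative assumptions for exposition; they are not the implementation described in Jones and Goel (2012). The sketch shows only the core mechanism: when a prediction is wrong, the meta-reasoner walks the abstraction network top-down and uses each concept's EVP to localize the faulty concept.

```python
# Hypothetical sketch (not the authors' implementation): an abstraction
# network whose nodes each carry an empirical verification procedure (EVP).

class Concept:
    """A concept node in an abstraction network with an attached EVP."""
    def __init__(self, name, evp, children=()):
        self.name = name
        self.evp = evp              # EVP: percept -> bool (verifies semantics)
        self.children = list(children)

def diagnose(node, percept):
    """Walk the AN top-down after an incorrect prediction.

    Returns the first concept whose EVP rejects the percept -- the
    candidate fault site for knowledge repair -- or None if every
    reachable concept verifies.
    """
    if not node.evp(percept):       # this concept's semantics fail verification
        return node
    for child in node.children:     # recurse into more specific concepts
        fault = diagnose(child, percept)
        if fault is not None:
            return fault
    return None

# Toy hierarchy: "vehicle" abstracts "car"; EVPs check perceptual features.
car = Concept("car", lambda p: p["wheels"] == 4)
vehicle = Concept("vehicle", lambda p: p["wheels"] >= 2, children=[car])

# A three-wheeled percept verifies "vehicle" but not "car", so the
# fault is localized to the "car" concept's semantics.
fault = diagnose(vehicle, {"wheels": 3})
print(fault.name)                   # -> car
```

Because each EVP verifies its own concept locally, diagnosis needs only one verification per concept along the traversal, which is one way to read the tractability claim above.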