**UPDATE: Paper Submission Deadline Extended -- April 20th**

Workshop Overview
The last few decades have witnessed a lively debate over whether visual mental representations are a genuine part of human cognition. Likewise, AI systems have varied in how heavily they rely on visual knowledge representation, ranging from purely proposition-based production systems with no explicit visual reasoning to systems that use overt models of visual knowledge. Advances in this area may enable more extensive autonomous reasoning in visual domains, foster deeper computational support for and understanding of human problem solving, modeling, and design, and improve human-machine interaction through richer and more effective use of visual representations. Drawing participants from diverse research communities, including AI, HCI, cognitive science, learning science, and design science, this interdisciplinary workshop aims to present and discuss the latest scientific research that may inform and advance progress toward these goals.
Topics for this workshop include, but are not limited to:
- Cognitive architectures
- Comparisons of visual and propositional approaches
- Diagrammatic reasoning
- Educational theory, technology, and practice
- Formal theories of visual representation
- High-level perception
- Mental images in cognition
- Multi-modal representations and reasoning
- Sketch understanding
- Spatial representations and reasoning
- Visual media theory and applications
- Visual representations and mental models
- Visual representations in creativity and design
- Visual representations in human culture
- Visual similarity and analogy
While the relevance and potential impact of visual representations and reasoning cut across many sub-disciplines of AI, research in this area often revolves around several central questions, such as:
- What makes a representation visual?
- How can the use of visual representations and reasoning improve the performance of an agent, and what specific properties of the task (and of the agent) enable this improvement?
- Is visual reasoning required for certain tasks?
- What role do visual representations play in intelligence?
- How are visual representations related to perception?
- How can propositional representations be extracted from visual ones, and vice versa? How can an agent's use of propositional and visual representations be blended seamlessly?