Analogies play multiple roles in social interactions. For example, analogies enable collaborative understanding of problems, generation of solutions, analysis of those solutions, and explanation of the underlying reasoning. However, analogical reasoning by itself provides few guarantees of logical correctness; evaluation is therefore an important part of making good analogies. We propose an analogy evaluation mechanism that we call social analogy evaluation, in which social interaction helps humans evaluate the analogies they make. In social analogy evaluation, the analogical reasoner interacts with one or more external evaluators to receive feedback about his or her analogy. We hypothesize that social interaction could be a powerful mechanism for evaluating analogies because different agents in a social interaction may have different knowledge, different reasoning, and/or different goals. These differences should enable the agents to critique an analogy from different perspectives, thereby collectively evaluating the analogy more broadly than a single agent could. This hypothesis suggests a potentially important role for computing: that of an external evaluator. A computer system can bring its own perspective to the task, acting as a kind of artificial teammate that helps human analogical reasoners evaluate their analogies. We are presently constructing such an artificial teammate, which we call ArTe. ArTe will evaluate analogies in the context of biologically inspired design and will output its evaluations for human consumption.