Properties play a central role in most theories of conceptual knowledge. Since computational models derived from word co-occurrence statistics have been claimed to provide a natural basis for semantic representations, the question arises whether such models can produce reasonable property-based descriptions of concepts, and whether these descriptions resemble those elicited from humans. This article presents a qualitative analysis of the properties generated by humans in two different settings, as well as those produced, for the same concepts, by two computational models. To uncover high-level generalizations, the analysis is conducted in terms of property types, i.e., by categorizing properties into classes such as functional and taxonomic properties. We find that differences and similarities among models cut across the human/computational distinction, suggesting, on the one hand, caution in making broad generalizations (e.g., about “grounded” vs. “amodal” approaches) and, on the other, that different models may reveal different facets of meaning, and should therefore be integrated rather than seen as rival ways of getting at the same information.