Proceedings of the IFSR Conversation 2012, St. Magdalena, Linz, Austria
An Empirical Taxonomy of Modeling Approaches
© 1990; excerpt reprinted with permission
People who work with models at the metalevel—whether to develop knowledge bases or to design tools that help people do modeling—require a systematic, unified framework within which they can operate. This framework should encompass informal as well as formal, and qualitative as well as quantitative, modeling approaches. It should facilitate the development of systems such as modeling resources that can support a user throughout the modeling process, from the earliest exploratory stages through the highest levels of specialized analytic techniques.
The use of knowledge requires the use of models, but we lack a coherent understanding of the relationship between the two. There is no general agreement on what a model is. To many scientists, the word “model” refers specifically to a computer simulation. To a mathematician, a model is a system of equations. To a logician, a model of a formula of a language is an interpretation of the language for which the formula comes out true. According to the O.E.D., a model is “a representation of structure” or “something that accurately resembles something else.”
Standards of terminology, method and evaluation criteria for modeling are well developed within certain narrow domains. These domains are determined by such factors as formal characteristics of the model (linear v. nonlinear, etc.), domain of application (econometric v. ecological, etc.), and techniques of analysis (regression v. linear programming, etc.). Yet there are no standards that hold across these domains, and there is no framework that indicates how the standards that do exist relate to one another.
It has not yet been possible to bridge the gulfs between these domains and construct a unified framework that shows the relationships between the formal, functional, structural, and behavioral characteristics of models and modeling approaches.
A unifying framework can be provided by seeing models as the results of the modeling activities of situated rational agents. The formal characteristics of the models, their domains of application, and the techniques used to analyze them are then seen to be the results of decisions made by the agent during the modeling process (consciously or not). The presence or absence of these features is therefore dependent on the factors which influenced those modeling decisions. The fundamental basis of a comprehensive and unified taxonomy of modeling approaches across domains should be those features of situation, motivation, and resource constraints which influence modeling decisions.
Questions To Be Considered
- What is a model?
- Are all representations models?
- Is a measurement a model?
- Is a metaphor a model?
- What are the syntax, semantics, and pragmatics of modeling?
- Are the syntax, semantics and pragmatics of modeling necessarily domain dependent, or can they be defined in a way that is (for all practical purposes) domain invariant?
- How are speech acts related to modeling acts?
- How is knowledge related to models?
- Is it possible to use knowledge without relying on the context of one or more models (e.g., a “world” model)?
- Does knowledge originate in any way other than through modeling?
- Does a collection of facts and rules in a knowledge base somehow “induce” one or more models?
- How are exact models related to fuzzy models?
- How are formal models related to informal models?
- How are implicit models related to explicit models?
- Is an explicit model always understood within the context of one or more implicit models—within a hierarchy of nested models?
- Under what conditions of situation, motivation and resource constraints do people generate and use models?
- What phenomena do they model?
- What are their motivations and objectives—implicit as well as explicit?
- What kinds of tradeoffs do they make to meet their objectives?
- What language do they use to discuss modeling?
- What kinds of representations do they use?
- What are their methods for model generation?
- What are their methods for model transformation and analysis?
- How do they use the results of their modeling activities?
- What are the decisions/choices that they make during modeling and what are the criteria that they use to make these decisions?
- What are the criteria they use to evaluate the results of their work?
- How do the evaluation criteria identified in the taxonomy (accuracy, reliability, maintainability, efficiency, usefulness, controllability, observability, robustness, stability, sensitivity, specificity, significance, etc.) relate to features of situation, motivation, and resource constraints?
- How well does the model of modeling behavior implicit in the taxonomy serve as a resource for designing modeling tools and knowledge bases, when judged by the above evaluation criteria?
- What are the consequences of different kinds of errors (sampling, sample design, biased measurement, non-conformable measurement, data handling, classification, formulation, logical, procedural, random, deliberate, etc.) for the types of models identified?
- Under what conditions is it meaningful/useful to use the output of one model as the input to another?
- Under what conditions is it meaningful/useful to use intermediate results from one model as the input to another?
- What kinds of conditions/assumptions make models incompatible or compatible?
- How do the results of this study relate to current controversies in statistical meta-analysis?
- If the design of a knowledge base entails the design of implicit models, what design criteria should be followed to ensure that these models are optimally suited to the intended uses (and users) of the knowledge base?
- What information regarding the origin of a particular item of knowledge should be encoded in a knowledge base to ensure that if it is used in modeling, the kinds of errors for which that modeling activity has low tolerance are not compounded?
- What information regarding the origin of a particular item of knowledge should be encoded in a knowledge base to allow for the optimal intelligent use of the knowledge base, and how should users be trained to this end?
- When is knowledge most robust to variation in modeling conditions and how can knowledge representations be designed to enhance this robustness?
- How can resources be developed to help users make good modeling decisions in all types of situations?