as to produce a more tractable analysis. Within social science circles the term is often equated with a research design in which an additional attribute obtains: cases are randomized across treatment and control groups. Antonym: observational.
External validity: See Validity.
Internal validity: See Validity.
N: See Observation.
Observation: The most basic element of any empirical endeavor. Any piece of evidence enlisted to support a proposition. Conventionally, the number of observations in an analysis is referred to by the letter N. Confusingly, N is also used to refer to the number of cases.
Randomization: A process by which cases in a sample are chosen, or assigned to groups, at random (with respect to some subject of interest). An essential element of experiments that use control groups, since the treatment and control groups, prior to treatment, must be similar in all respects relevant to the inference, and the easiest way to achieve this is through random assignment. Sometimes randomization occurs across matched pairs or within substrata of the sample (stratified random sampling), rather than across the entire population.
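The two assignment procedures described in this entry, simple randomization and randomization within substrata, can be sketched in a few lines of Python. This is an illustrative sketch only; the function names and the even treatment/control split are assumptions of the example, not part of the definition above.

```python
import random

def randomize(cases, seed=None):
    """Simple randomization: shuffle the cases, then split them
    evenly into treatment and control groups, so each case is
    equally likely to land in either group."""
    rng = random.Random(seed)
    shuffled = list(cases)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # treatment, control

def stratified_randomize(cases, stratum_of, seed=None):
    """Stratified randomization: group cases by stratum, then
    randomize within each stratum, so treatment and control
    mirror the sample's composition on the stratifying attribute."""
    rng = random.Random(seed)
    strata = {}
    for case in cases:
        strata.setdefault(stratum_of(case), []).append(case)
    treatment, control = [], []
    for members in strata.values():
        rng.shuffle(members)
        half = len(members) // 2
        treatment.extend(members[:half])
        control.extend(members[half:])
    return treatment, control
```

With a balanced sample, both procedures partition the cases into two equal groups; the stratified version additionally guarantees balance within each stratum, which simple randomization achieves only in expectation.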
Research design: The way in which empirical evidence is brought to bear on a hypothesis.
Treatment: See Experiment.
Validity: Internal validity refers to the correctness of a hypothesis with respect to the sample (the cases actually studied by the researcher). External validity refers to the correctness of a hypothesis with respect to the population of an inference (cases not studied but that the inference is thought to explain). The key element of external validity thus rests on the representativeness of a sample, that is, its relative freedom from bias.
Variable: An attribute of an observation or a set of observations. In the analysis of causal relations, variables are understood either as independent (explanatory or exogenous), denoted X, or as dependent (endogenous), denoted Y.
Within-case analysis: See Case study.
X: See Variable.
Y: See Variable.