SEEING CONSERVED SIGNALS: USING ALGORITHMS TO DETECT SIMILARITIES BETWEEN BIOSEQUENCES

dot plots give a meaningful visualization of all the similarities between segments in a single snapshot and are ubiquitous.

Figure 3.7: Dot plot of somatotropin alignments.

VARIATIONS ON SEQUENCE COMPARISON

In this section a number of the most important variations on sequence comparison are examined. The survey is by no means exhaustive.

Variations in Gap Cost Penalties

How to assign scores to alignment gaps has always been more problematic than scoring aligned symbols, because the statistical effect of gaps is not well understood (see Chapter 4). Nature frequently deletes or inserts entire substrings as a unit, as opposed to individual polymer elements. It is thus natural to think of cost models in which the score of a gap is not just the sum of scores assigned to the individual symbols in the gap, as was used in the previous two sections, but rather a more general function, gap(x), of its length x. For example, it is common to score a gap according to the affine function gap(x) = r + sx, where r > 0 is the penalty for the introduction of the gap and s > 0 is the penalty for each symbol in the gap. Such affine gap costs are particularly important when comparing proteins. For example, a gap penalty of 8 + 4x works well in conjunction with the aligned symbol scores of Figure 3.5. Because a gap is viewed as detracting from similarity, its score is a penalty that is subtracted from the total.

Accommodating affine gap scores involves the following variation on the central recurrence (Gotoh, 1982). For each subproblem, Ai versus Bj, one develops recurrences for (1) the best alignment that ends with an A-gap, Ag(i, j), (2) the best alignment that ends with a B-gap, Bg(i, j), and (3) the best overall alignment, S(i, j). This leads to the following system of recurrence equations:

Ag(i, j) = max{Ag(i − 1, j) − s, S(i − 1, j) − (r + s)}
Bg(i, j) = max{Bg(i, j − 1) − s, S(i, j − 1) − (r + s)}
S(i, j) = max{S(i − 1, j − 1) + δ(ai, bj), Ag(i, j), Bg(i, j)}

S terms contributing to an Ag or Bg value are penalized r + s because a gap is being initiated from that term. Ag terms contributing to Ag values and Bg terms contributing to Bg values are penalized only s because the gap is just being extended. An algorithm that applies these recurrences at each (i, j) leads to an O(MN)-time algorithm for global alignments with affine gap costs. Simply adding a 0 term to the S-recurrence gives an algorithm for local alignments with affine gap costs. Summation and affine functions are not the only options available for scoring gaps.
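The three recurrences can be sketched as a short dynamic program. This is a minimal illustration, not the chapter's own code; the +1/−1 symbol scores and the default values r = 2, s = 1 are arbitrary choices for demonstration.

```python
NEG = float("-inf")

def simple_score(a, b):
    # Illustrative aligned-symbol score: +1 for a match, -1 for a mismatch
    # (an assumption for this sketch, not a table from the text).
    return 1 if a == b else -1

def affine_global_score(A, B, score=simple_score, r=2, s=1):
    """Optimal global alignment score of A and B, where a gap of length x
    is penalized r + s*x, following the three recurrences above."""
    M, N = len(A), len(B)
    # S[i][j]:  best alignment of A[:i] versus B[:j]
    # Ag[i][j]: best such alignment ending with an A-gap (extends along i)
    # Bg[i][j]: best such alignment ending with a B-gap (extends along j)
    S  = [[NEG] * (N + 1) for _ in range(M + 1)]
    Ag = [[NEG] * (N + 1) for _ in range(M + 1)]
    Bg = [[NEG] * (N + 1) for _ in range(M + 1)]
    S[0][0] = 0
    for i in range(1, M + 1):        # first column: A[:i] against one long gap
        Ag[i][0] = S[i][0] = -(r + s * i)
    for j in range(1, N + 1):        # first row: B[:j] against one long gap
        Bg[0][j] = S[0][j] = -(r + s * j)
    for i in range(1, M + 1):
        for j in range(1, N + 1):
            Ag[i][j] = max(Ag[i - 1][j] - s, S[i - 1][j] - (r + s))
            Bg[i][j] = max(Bg[i][j - 1] - s, S[i][j - 1] - (r + s))
            S[i][j] = max(S[i - 1][j - 1] + score(A[i - 1], B[j - 1]),
                          Ag[i][j], Bg[i][j])
    return S[M][N]
```

For the local-alignment variant mentioned above, one would add a 0 term to the S maximization and report the largest S(i, j) anywhere in the matrix rather than S(M, N).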
The gap cost function gap(x) can be taken to be a concave (flat or cupped downward) function of length, that is, a function such that gap(x + 1) − gap(x) ≤ gap(x) − gap(x − 1) for all x > 0. The class of concave gap cost functions includes affine functions but is much wider than just affine functions. For example, for positive a and b, the function gap(x) = a log x + b is a concave function that finds occasional use in
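The concavity condition can be checked numerically. In this small sketch, the logarithmic parameters a = 3 and b = 2 are illustrative choices, not values from the text; the affine example reuses the 8 + 4x protein penalty mentioned above.

```python
import math

def is_concave(gap, xs):
    """Check the discrete concavity condition
    gap(x + 1) - gap(x) <= gap(x) - gap(x - 1)
    over the given values of x (with a tiny tolerance for floating point)."""
    return all(gap(x + 1) - gap(x) <= gap(x) - gap(x - 1) + 1e-12 for x in xs)

affine = lambda x: 8 + 4 * x                  # the 8 + 4x penalty from above
logarithmic = lambda x: 3 * math.log(x) + 2   # a = 3, b = 2, illustrative
```

An affine function satisfies the condition with equality (both differences equal s), while a convex function such as x² violates it, so is_concave rejects it.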