2
Bone Health and Risk Factors

The most important modifiable risk factors associated with bone mass are hormonal status, physical activity, and nutrition. Nonmodifiable factors, such as hereditary and familial factors, have a strong influence on bones, especially on the spine and less so on the femoral neck (Kelly and Eisman, 1992; Krall and Dawson-Hughes, 1993; Sambrook et al., 1993). The impact of these risk factors extends throughout an individual's life.

Individuals are thought to attain their peak bone mass in the age range of military recruits, although the timing may vary by skeletal site. For example, peak bone mass in the hip is achieved by age 16 years (Weaver, 1997). In a group of females aged 11 to 32 years, 92 percent of the maximal total-body bone mass observed was present by age 18, and 99 percent was present by age 26 (Teegarden et al., 1995). Bone mass, which may continue to accumulate up to age 30, may be adversely affected by inadequate calcium intakes during adolescence and early adulthood. Achievement of an insufficient peak bone mass is considered one of the major risk factors for subsequent development of osteoporosis (Lu et al., 1994; Recker et al., 1992; Teegarden et al., 1995). Among young adults, skeletal mass is generally greater in males than in females, and greater in blacks than in whites. In addition, bone loss begins at an earlier age in females than in males, and although onset of loss in females begins before menopause, the rate of skeletal loss in some females is distinctly accelerated during the first five years after menopause (Dawson-Hughes et al., 1990). Consequently, net skeletal mass is lowest in the menopausal white female, which increases the predisposition to skeletal fracture.

It is unclear whether increasing calcium intake in children and adolescents will have an impact on peak bone density or future risk of fracture. However, it has been demonstrated that






increasing calcium intake from food sources (from 900 to 1,500 mg daily) prevents bone loss from the spine in premenopausal women (Baran et al., 1990).

BONE MINERAL DENSITY

Peak bone mass is most often measured and referred to as bone mineral density (BMD, g/cm2)[1] because of the strong correlation between BMD and bone strength (Bouxsein and Marcus, 1994; Moro et al., 1995). Many studies have found associations between low BMD and a history of previous fracture (Gluer et al., 1996; Honkanen et al., 1991). However, these studies are difficult to interpret because immobilization and inactivity following a fracture may have contributed to the reduced BMD, as noted by several investigators (Henderson et al., 1992; Kannus et al., 1994). Individuals with a tibial fracture who were immobilized for longer periods (mean of 27 weeks) still showed a deficit in bone density a mean of 9 years after the fracture (Kannus et al., 1994). Thus, it is not clear whether low BMD led to the fracture or resulted from it.

Prospective studies have also found a relationship between increased fracture risk and low BMD (Kelsey et al., 1992). Low bone mineral content (BMC) or BMD, whether measured by procedures involving radiation (such as dual energy x-ray absorptiometry [DXA], single photon absorptiometry [SPA], or radiographic absorptiometry) or by ultrasound, is significantly associated with an increased risk of future fracture in older women, as illustrated in a recent meta-analysis (Marshall et al., 1996). The majority of studies of fracture risk and BMC have been completed in older, often postmenopausal, women; few studies have investigated young adults. Far fewer published reports have compared BMD with the incidence of stress fractures than with complete fractures. In a 12-month prospective study, stress fracture occurrence was associated with low BMD in female, but not male, athletes (Bennell et al., 1996).
A longitudinal study of 1,319 recruits conducted at the Naval Health Research Center in conjunction with Johns Hopkins University found a lower BMD of the tibia and femur (using DXA) in recruits who sustained stress fractures (n = 59) during basic training compared with recruits without stress fractures (Beck, 1997; Shaffer, 1997). Bone strength index[2] also was lower in recruits with fracture than in those without.

Another important predictor of fracture risk is the rate at which bone is being lost (Beck, 1997). The faster bone is lost, the more likely it is that undue mechanical stress can lead to fracture. The rate of bone loss is, in itself, an independent risk factor for fracture, as documented by studies of states of rapid bone loss such as those that occur after organ transplantation (Shane et al., 1997).

[1] Bone mineral density is bone mineral content, the amount of mineral at a particular skeletal site, divided by the area of the scanned region. BMD and bone mineral content are both measured by a variety of related technologies, including dual energy x-ray absorptiometry.

[2] Bone strength index is based on the observation that the resistance of a bone to bending and twisting is directly related to the section modulus as measured by DXA and inversely related to bone length (Selker and Carter, 1989).
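The footnote's definition of areal BMD (mineral content divided by scanned area) amounts to a one-line calculation. The sketch below illustrates it; the scan values are hypothetical and are not taken from any study cited here.

```python
# Areal BMD as defined in the footnote: BMD (g/cm^2) = BMC (g) / scanned area (cm^2).
# The scan values below are hypothetical, for illustration only.

def bone_mineral_density(bmc_g: float, area_cm2: float) -> float:
    """Areal bone mineral density in g/cm^2, as reported by DXA-type scans."""
    return bmc_g / area_cm2

# A hypothetical lumbar-spine scan: 60 g of mineral over a 50 cm^2 region.
print(bone_mineral_density(bmc_g=60.0, area_cm2=50.0))  # 1.2
```

Note that this areal quantity is what DXA reports; as discussed later in the chapter, it is not a true volumetric density.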

Kimmel and coworkers enrolled 3,300 female soldiers in a prospective study designed to determine predictors of stress fracture during basic combat training. Broadband ultrasound attenuation (BUA) of bone, an ultrasound-based measurement, was assessed at baseline in all female soldiers and again at the time of initial presentation with possible fracture. There were 338 soldiers with stress fractures. Fitness level at baseline, smoking history, and BUA were independent predictors of stress fracture risk, and soldiers who developed fractures showed a greater decrease in BUA earlier in basic training than did soldiers who did not develop fractures. This decline in BUA during basic training indicates that soldiers respond to the additional bone stresses of this period with increased bone turnover, which causes a transient downward adjustment in bone strength due to an expansion of the remodeling space (Kimmel, 1997). Although BUA has been found to predict fracture risk in groups of individuals, current methodology cannot adequately assess individual risk for stress fracture because of its low sensitivity and specificity.

TECHNICAL MEASUREMENTS

As noted earlier, it is important to correlate results from bone imaging with clinical symptoms when diagnosing stress fractures. Other measures and techniques may be useful in the prediction of fracture risk, including the measurement of bone markers and bone mass.

Bone Markers

Bone markers are useful as indices of the dynamic processes of bone formation (osteocalcin, skeletal alkaline phosphatase, and type 1 procollagen carboxy-terminal extension peptide) and bone resorption (urinary hydroxyproline, total pyridinolines, and N-telopeptide) (Kleerekoper, 1997). They are helpful in gauging the extent of bone turnover, the net result of these two processes. Because high bone turnover in the adult is always associated with a net negative calcium balance, elevated bone markers usually signify bone loss.
Bone markers can be useful as an adjunct to direct bone mass measurement, which is a static index. Recent studies suggest that bone markers may, in fact, be an independent predictor of bone loss and can be correlated with fracture risk. In addition, bone markers are used to monitor the course of therapy for osteoporosis. For instance, a substantial reduction in markers of bone resorption with antiresorptive therapy for osteoporosis (e.g., estrogen, bisphosphonates, calcitonin) predicts subsequent increases in bone mass as determined by direct measurement. The major limitations to the widespread use of these assays are the assays themselves, population variables, and day-to-day variability in the measurements. For changes to be clinically helpful, results generally must differ from baseline values by more than 30 percent. Nevertheless, bone markers are likely to become more useful as assay methods and collection procedures become more standardized.

Local events such as trauma, fragility, and stress fracture will increase bone remodeling at the sites so challenged. The clinical presentation of stress fracture represents the cumulative effects
of excess, repetitive mechanical forces applied to sites in the skeleton that are not competent to withstand such forces. The stress fracture is likely to be preceded by attempts at repair of the subclinical microdamage. Such repair mechanisms would involve activation of the remodeling processes normally associated with bone turnover. Using measurement of bone markers to detect these attempts at microrepair before fracture occurs has potential clinical value. However, it remains highly speculative whether such early events could be detected with the assays currently available, and it is unclear how such measurements would do more than call attention to such stresses on the skeleton. Such early indications of excessive mechanical stress would not necessarily eventuate in a clinical stress fracture.

Bone Mass

Of the wide variety of techniques available to measure bone mass, DXA is the most frequently used method, measuring total body as well as specific regions of bone. Because DXA involves measurement in only two dimensions, it does not provide a true estimate of density, which is a three-dimensional measurement (Beck et al., 1997). Quantitative computed tomography (QCT) is one method that measures true volumetric bone density, but it is limited in other ways (e.g., accuracy and radiation exposure). Methods employing ultrasound technology, still in the developmental phase, measure not only bone quantity but may also incorporate a component of bone quality into the measurement (Kimmel et al., 1997). Thus, none of the current technologies (DXA, peripheral DXA [pDXA], QCT, peripheral QCT [pQCT], or ultrasound) can be used to screen for stress fractures. They are useful for bone density assessment, but there is scant literature to support a relationship between various levels of bone density and the risk of stress fractures.
Genetic Markers

An area of active investigation is a search for specific genes responsible for processes involved in the establishment of peak bone mass in youth and the progressive loss of bone mass with aging. Thus far, a number of candidate genes have been proposed. These include genes for the vitamin D receptor, the estrogen receptor, type 1 collagen, and IGF-1. The studies, which are still far from conclusive (Sambrook et al., 1996), nevertheless point clearly to a disorder that is likely to be polygenic, with a variety of genes contributing importantly to the osteoporotic process. Thus, each recruit brings to the field a different susceptibility to this condition and potentially a different susceptibility to fracture.

Diet

Adequate intakes of calcium and vitamin D are essential for optimizing bone health (IOM, 1997). Additionally, phosphorus, magnesium, and fluoride play a key role in the development and maintenance of bone. Other nutrients may have biological significance in the development and maintenance of bone, such as the microminerals copper, zinc, manganese, and boron and the
vitamins C and K. However, calcium remains the most critical nutrient for bone development; the majority of the body's calcium resides in bone.

The recent Institute of Medicine Panel on Calcium and Related Nutrients used the maximal retention of calcium approach (a mathematical model that predicts the lowest level of calcium intake that maximizes the amount of calcium retained in the body) for determining the adequate intake (AI) for calcium. This approach was used because increased calcium retention can be equated with higher bone mass, as over 99 percent of total body calcium is found in bone (IOM, 1997, p. 4-13). The AI for calcium was an experimentally derived, approximate, group mean value for calcium intake that appears to support maximal calcium retention. An AI, rather than an estimated average requirement (EAR[3]), was set for calcium because of: (1) uncertainties in the methods inherent in the balance studies that formed the basis of the maximal retention model, (2) the lack of concordance between observational and experimental data, and (3) the lack of longitudinal data that could be used to verify the association of the experimentally derived calcium intakes for maximal retention with the rate and extent of long-term bone loss and its clinical sequelae, such as fracture (see IOM, 1997, p. S-4). The AI for calcium is 1,300 mg/d for 14- to 18-year-olds and 1,000 mg/d for 19- to 30-year-olds.

No consistent association has been found between the occurrence of stress fracture and calcium intake in either athletes or military recruits, possibly due to the relatively high calcium intakes in both groups (Bennell et al., 1996; Schwellnus and Jordaan, 1992). However, nutritional surveys of military women, as described in Table 2-1, reveal a wide range of calcium intakes that, at a number of different military facilities, are consistently lower than the Military Recommended Dietary Allowances (MRDAs).
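As a quick check of the survey data, the mean calcium intakes reported in Table 2-1 can be screened against the lower bound of the MRDA range (800 mg/d). This short sketch simply flags the surveys whose mean intake falls below that bound; the figures are those given in the table.

```python
# Mean calcium intakes (mg/d) for female soldiers, from Table 2-1,
# screened against the lower bound of the MRDA range (800-1,200 mg/d).
MRDA_CALCIUM_LOW_MG = 800

mean_calcium_mg = {
    "Hawaii 1985": 577,
    "Bolivia 1990": 664,
    "West Point 1979-1980": 954,
    "Ft Jackson 1988": 907,
    "West Point 1990": 1001,
    "Ft Jackson 1993": 728,
    "Ft Sam Houston 1995": 918,
}

below = sorted(site for site, mg in mean_calcium_mg.items()
               if mg < MRDA_CALCIUM_LOW_MG)
print(below)  # ['Bolivia 1990', 'Ft Jackson 1993', 'Hawaii 1985']
```

Both field surveys and one dining hall survey fall below the MRDA lower bound, consistent with the text's observation of a wide range of intakes.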
One study in athletes found that calcium intakes greater than 800 mg/d were protective against stress fractures (Myburgh et al., 1990). Other investigators have found no association between stress fracture occurrence and calcium intakes of 1,100 mg/d (Bennell et al., 1996). Another study found no reduction in stress fractures in a group of military recruits given supplemental calcium, compared with those not receiving supplements, although the number in the supplemented group may have been too small to detect an effect (Schwellnus and Jordaan, 1992).

It is not known whether dietary calcium requirements are increased during intense physical training. Decreased BMC was reported in trained male athletes in one report (Bilanin et al., 1989). Klesges et al. (1996), in a study of 11 National College Athletic Association Division I basketball players, reported a 6.1 percent decrease in total body BMC between the preseason and late summer, despite a mean intake of approximately 2,000 mg/d of calcium. The investigators attributed this loss of bone to negative calcium balance resulting from high calcium losses through sweat (mean of 247 mg/d). Although there was no control group in this study, a significant increase in BMC was observed the following year when the players were supplemented with varying doses of calcium (500–2,000 mg/d) and vitamin D (400–560 IU/d). The level of supplementation was matched to the degree of bone loss observed during the previous year. No other studies of calcium balance during intense physical activity could be identified.

Loss of calcium through sweat has previously been reported to be low (40–144 mg/d) (Charles et al., 1991). The factorial approach[4] was used for estimating calcium requirements in adults aged 19 to 30 years (IOM, 1997), based on sweat losses obtained from a group of sedentary healthy volunteers (Charles et al., 1983), which averaged 63 mg/d over a 7-day period for both males and females. It is not known what effects prolonged, intense physical activity; stress; and altered eating patterns may have on calcium status in military women. Thus, further studies are needed to determine whether physically active military women are at increased risk of not meeting their requirements for calcium due to increased loss of calcium in sweat.

Increased intakes of protein, sodium, and caffeine have been suggested to adversely affect calcium balance and optimal bone health (IOM, 1997, p. 4-4 to 4-5). However, there are no reports of an increased risk of stress fracture with increased intake of these food components or with decreased intake of the other key bone-related nutrients (phosphorus, vitamin D, magnesium, and fluoride).

[3] The EAR is the nutrient intake value that is estimated to meet the requirement defined by a specified indicator of adequacy in 50 percent of the individuals in a life-stage and gender group.

[4] "[A] more traditional factorial approach for estimating calcium requirements is to sum calcium needs for growth (accretion) plus calcium losses (urine, feces, and sweat) and adjust for absorption. Using this method, estimates for calcium requirements for adolescent girls and boys [aged 9–18 years] are 1,276 and 1,505 mg (31.9 and 37.6 mmol)/d, respectively" (IOM, 1997, p. 4-26). For females aged 19 to 30 years, the estimated requirement would be 1,393 mg (34.8 mmol)/d and for males, 1,437 mg (35.9 mmol)/d.

TABLE 2-1 Military Nutritional Surveys: Mean Nutritional Intake of Female Soldiers
(Hawaii 1985 and Bolivia 1990 were field surveys; the remaining five were dining hall surveys.)

Nutrient          MRDA*        Hawaii  Bolivia  West Point  Ft Jackson  West Point  Ft Jackson  Ft Sam Houston
                               1985    1990     1979-1980   1988        1990        1993        1995
                               n = 36  n = 13   n = 54      n = 40      n = 86      n = 49      n = 50**
Energy, kcal      2,000-2,800  1,834   1,668    2,454       2,467       2,314       2,592       2,037
Protein, g        80           67      68       84          96          79          82          75
Carbohydrate, g   330          235     218      284         318         325         365         289
Fat, g            <93          70      57       107         94          81          94          63
Calcium, mg       800-1,200    577     664      954         907         1,001       728         918
Phosphorus, mg    800-1,200    1,065   1,059    1,347       1,600       1,391       1,296       1,333
Magnesium, mg     300          —       218      —           —           315         267         285
Iron, mg          18           11.9    11.7     16.2        18.4        28          16.2        16.8

* Military Recommended Dietary Allowance for moderately active military women aged 17–50 years.
** Officer basic training.
SOURCE: Adapted from King (1993).

BODY COMPOSITION

Fat mass contributes to bone mineral mass, and thus to fracture risk, through at least two mechanisms. First, fat mass, through its influence on total body weight, has a trophic effect on bone mineral. Greater body fat and weight are linked with heavier skeletal mass, independent of gender or age (Chen et al., 1997), although fat mass or adipose tissue is not a main determinant of bone density in young women. Second, fat mass, or more specifically adipose tissue, is an important site of estrogen production in postmenopausal women (Rebuffe-Scrive et al., 1986). Estrogens positively influence bone mineral mass and density.

A well-established relationship also exists between skeletal muscle and bone mineral mass. In a classic study, Doyle et al. (1970) observed a significant correlation in human cadavers between vertebral bone mass and corresponding muscle, and Cohn (1975) extended these studies by demonstrating strong correlations at all ages between total body potassium, a measure of body cell mass, and total body calcium, a measure of bone mass. Burr (1997) examined the biomechanical link between skeletal muscle force or tension and corresponding bone mineral mass or density. According to Frost (as quoted by Burr), "voluntary muscle forces … dominate a bone's postnatal structural adaptations to mechanical usage." This link represents the classic biological relationship, "form follows function." These forces on bone are mainly transmitted by skeletal muscle tension and are not solely related to body weight. Forces generated by skeletal muscles appear to extend beyond generalized bone mineral effects; actual bone morphology and geometry are, to some extent, shaped by forces generated by skeletal muscles. Finally, as an example of the strong skeletal muscle–bone mineral association, quadriceps and hamstring skeletal muscle strength independently predict humerus and spine bone mineral density after controlling for such other relevant factors as
body weight (Brukner, 1997). Skeletal muscle strength–force relations are thought to influence bone mineral mass and density, although additional research on this topic is needed (Burr, 1997). Thus, fat mass and skeletal muscle mass, two body composition components extrinsic to the skeleton, appear to have important influences on the growth, mass, shape, geometry, and strength of bone.

Physical activities that generate a large muscle force can cause stress fractures at selected anatomic locations. For example, the pelvic stress fractures involving the pubic rami that are often observed in women are thought to be related to excessive generated skeletal muscle force. Similarly, rowers and paddlers often suffer rib stress fractures (Matheson et al., 1987b). A related problem may occur in individuals who have a discrepancy in leg length. Discrepancies in contraction of skeletal muscles may cause an asymmetric load on bone that can produce a stress fracture (Matheson et al., 1987a).

Several factors related to skeletal muscle mass and strength also favor inordinate bone loading and hence set the stage for stress fractures. Bones are encased in skeletal muscle, and muscles insulate bone and absorb mechanical loading. Muscle resists mechanical stress on bone, particularly some muscles that resist bending and torsion. For example, the tensor fasciae latae muscle in the leg tends to resist bending of the femur and may redirect bending stresses, to which bone is weaker, into axial compression, to which bone is stronger. Moreover, individuals who have weaker muscles may fatigue more easily; the musculoskeletal system thereby has a reduced ability to resist continued bending and loading stresses. Inadequate skeletal muscle mass or strength may therefore favor transmission of unduly large forces to bone and thus permit development of stress fractures. In favor of this hypothesis, Brukner (1997) and Beck et al.
(1996) observed reduced calf girth in women who suffered stress fractures during basic combat training. Although the conclusion is not clearly documented, smaller male soldiers appear to be at greater risk of stress fracture than larger male soldiers, and this increased risk may stem from either their reduced skeletal muscle mass or poorer physical conditioning.

PHYSICAL ACTIVITY AND FITNESS

Numerous cross-sectional studies in athletes have supported an association between BMD and long-term exercise behaviors (for a review, see Suominen, 1993). In general, the increases in BMD appear to be specific to the regions that are being loaded (Alfredson et al., 1996; Taafe et al., 1997), and the effect appears to be long term (Etherington et al., 1996). Although the concept that weight-bearing activity determines the shape and mass of bone is generally accepted, results from controlled trials of physical activity in nonathletes are not consistent. A recent review of prospective trials, predominantly in older women, showed that a beneficial effect of increased activity on BMD was observed in groups consuming a mean calcium intake greater than 1,100 mg/d, whereas trials in groups consuming less than 1,100 mg/d tended to show no benefit of activity on BMD (Specker, 1996). No prospective controlled study specifically designed to test the hypothesis that calcium intake modifies the bone response to activity has been reported.

The term fitness has several meanings. In the first context, fitness refers to the level of cardiorespiratory reserve (aerobic capacity); in the second, it refers to a more general readiness to participate in physical activity. The latter context includes adaptations such as muscular strength,
skill development, and musculoskeletal readiness. From the previous section on the pathophysiology of stress fractures, it is clear that to avoid cumulative microdamage to bone, the rate of loading must closely match the rate of remodeling. However, unlike a low level of aerobic fitness, in which the person is simply unable to complete a physical task at a higher intensity until metabolic and cardiorespiratory adaptations are sufficient, even sedentary people can tolerate a large volume of cyclic loading to the skeletal system for several weeks before symptoms of stress fracture occur. The architectural adaptations of bone that permit cyclic loading at higher levels of physical activity are not required in the first few weeks of training, and any excessive strain produced during this early phase does not impair a person's ability to participate until the cumulative stress produces clinically obvious symptoms. Thus, to attain fitness, a training program must include a history of sufficient loading and remodeling within bone if stress injuries and fractures are to be prevented during periods of intense training.

The mechanical and cellular response of bone to loading is based on four factors: (1) the number of strain cycles experienced by the bone (volume of training, total cumulative load); (2) the frequency of strain cycles, or load per unit time (intensity, pace, time for recovery); (3) the magnitude of strain of each load cycle (factors that affect shock absorption and attenuation include muscular strength and endurance, footwear, and lower extremity alignment); and (4) the duration of each strain cycle (terrain, lower extremity alignment).

From a clinical perspective, five factors contribute to stress fracture occurrence. Training errors are increases in physical activity (loading) at a rate that exceeds the bone's rate of remodeling.
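The four loading factors above can be combined into a toy model of daily cumulative strain. Everything in this sketch (the function, the threshold, and the numbers) is invented for illustration; it is not a validated injury model.

```python
# Toy model only: daily cumulative strain stimulus as the product of the
# number of strain cycles and the strain magnitude per cycle. The repair
# threshold is a made-up figure, not a physiological constant.

def daily_cumulative_load(strain_cycles: int, strain_per_cycle: float) -> float:
    """Cumulative strain stimulus per day (arbitrary units)."""
    return strain_cycles * strain_per_cycle

REMODELING_CAPACITY = 50_000.0  # hypothetical daily repair capacity (arbitrary units)

load = daily_cumulative_load(strain_cycles=12_000, strain_per_cycle=5.0)
print(load > REMODELING_CAPACITY)  # True: in this toy model, microdamage accumulates
```

The point of the sketch is only that training errors can raise either the cycle count or the per-cycle strain past what remodeling can repair; real loading responses depend on recovery time and shock attenuation as described above.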
The expression "too much too soon" captures all components of training, including volume, intensity, pace, and recovery. Training error is the single factor most likely to cause a stress fracture in a nonelite, recreational athlete or a recruit in basic military training (Matheson et al., 1987a).

Muscle fatigue is also an important factor. Attenuation of ground reaction forces, principally through eccentric contraction of muscle, minimizes the load transmitted directly to bone. If muscle fatigue occurs during an exercise bout, bone will experience a greater load. Clinical measurement of muscular strength does not measure resistance to muscle fatigue, which depends on the metabolic capability to produce adenosine triphosphate (ATP) continuously through oxidative phosphorylation at the rate required to sustain a given workload. Resistance to fatigue develops only through sport-specific training, and the muscular adaptations that provide enhanced oxidative capacity take from 8 to 16 weeks to develop fully (Matheson et al., 1987a).

Lower extremity alignment can predispose a person to the development of stress fractures. High-arched, pes cavus-style feet absorb less shock, while pes planus ("flat") feet transmit more force to the tibia. Genu varum (bowlegged) and genu valgum (knock-kneed) alignment, excessive Q angles (the angle of intersection between the direction of pull on the patella by the quadriceps muscles and the direction of resistance by the patellar tendon), leg length discrepancies, and femoral neck anteversion are all associated with variations in gait that can influence the distribution of forces to bone in the lower extremity (Matheson et al., 1987a).

Terrain influences the force delivered to bone in two ways. Hard surfaces require greater shock absorption by lower leg muscles, and cambered or uneven surfaces require compensatory muscular contractions that alter the balance of loading between the two lower extremities.

Equipment, particularly footwear, can influence the load delivered to bone, both through its shock absorption and through whether its design accommodates the foot type and lower extremity alignment. Guidelines for selecting appropriate footwear are provided in the Army's Physical Fitness Training manual (FM 21-20, 1992), and recruits are often provided assistance in making the correct selection based on a screening exam they receive in their unit. A well-designed shoe (flexible sole, shock absorbing, good arch and heel support) can assist in reducing the number of lower limb injuries arising from sport and training activities. However, it has been reported that a training shoe is rarely implicated in a stress fracture injury (Nike, 1987).

In the military, the general trend has been to employ athletic shoes rather than boots for physical training sessions, particularly for running, at least during the early stages of recruit training. Current U.S. Army training policy (Bentley, 1978) allows running shoes rather than military combat boots to be worn during training runs. This policy is supported by a study in New Zealand Army recruits (Stacy and Hungeford, 1984), in which total injuries were reduced from 85 per 100 to 52 per 100 recruits by eliminating running in boots during the first 5 weeks and by introducing boots gradually over this period. Comparisons between training shoes and combat boots have shown that boots provide almost 50 percent less rear-foot shock absorption, are 100 percent less flexible, and are 200 percent less shock absorbent in the forefoot (Cavanagh, 1980). Approximately one-third of a recruit's training time is spent in running shoes (personal communication, L. Tomasi, Ft. Benning, 1998).

Because injuries can be caused by running on hard surfaces, soldiers should, if possible, avoid running on concrete. Asphalt surfaces provide more cushioning than concrete.
Soft, even surfaces, such as grass paths, dirt paths, or park trails, are best for injury prevention. However, with adequate footwear and recovery periods, running on roads and other hard surfaces should pose no problem. Fine-tuning of athletic footwear to specific foot types can also be helpful. The high-arched, rigid, pes cavus foot benefits from a flexible shoe with a curved last and excellent shock-absorbing characteristics. The flexible pes planus style of foot benefits from a straight-sole shoe with a stiff heel counter that provides motion control.

ORAL CONTRACEPTIVES

Several investigators have shown an increase in BMD with oral contraceptive use in premenopausal women (Fortney et al., 1994; Goldsmith and Johnston, 1975; Lindsay et al., 1986; Recker et al., 1992; Shargil, 1985) and with past use in postmenopausal women (Enzelsberger et al., 1988; Goldsmith and Johnston, 1975; Kleerekoper et al., 1991; Kritz-Silverstein and Barrett-Connor, 1993), while others have failed to detect a beneficial effect on bone in premenopausal (Hall et al., 1990; Hreshchyshyn et al., 1988; Mazess and Barden, 1991; Murphy et al., 1993; Volpe et al., 1993) and postmenopausal women (Hreshchyshyn et al., 1988; Murphy et al., 1993). This discrepancy in results can be explained, at least in part, by different dosages and durations of pill use, varying ages of the populations studied, varying lengths of follow-up, insufficient sample sizes, inadequate control for confounders, and different bone sites and techniques for measurement of BMD (DeCherney, 1996; Garnero et al., 1995).

Consistent evidence supports a causal relationship between estrogen deficiency and osteoporosis and the efficacy of estrogen replacement therapy in reducing bone loss after cessation of ovarian function following oophorectomy, premature ovarian failure, and menopause (Mehta, 1993). However, important differences in drug formulation, dosage, and regimen exist among studies evaluating hormone replacement therapy and oral contraceptive use. In retrospective epidemiological studies, women who used oral contraceptives for more than 10 years had the greatest protection against low BMD (Enzelsberger et al., 1988; Kleerekoper et al., 1991). In a cross-sectional study of premenopausal women taking 30 µg ethinyl estradiol for a mean duration of 6.7 years, all markers of bone formation (serum osteocalcin, bone-specific alkaline phosphatase, C-terminal propeptide of type 1 collagen) and resorption (pyridinoline cross-linked peptides) were decreased (Garnero et al., 1995). However, total-body BMC and BMD, and lumbar spine, total hip, and distal radius BMD, did not differ between oral contraceptive users and nonusers.

During the 20 years that combined estrogen-progestagen preparations have been in use, the dosage has been modified to reduce side effects and complications. Most compounds now include 35 µg or less of estrogen and varying amounts of progestagen. The advent of very-low-dose oral contraceptives (< 20 µg/d) has reduced thrombotic complications. The progestagenic components of oral contraceptives may also have a positive impact on BMD, although the evidence is less conclusive. Progestational agents such as norethindrone have estrogen-like properties and interact with androgen receptors. Urinary calcium excretion was reduced, and bone mass was stabilized or increased, with use of certain progestational agents alone or in combination with estrogen (Riis et al., 1990; Stevenson et al., 1990; Surrey et al., 1990).
Exogenous estrogen-progestagen hormones given to women of reproductive age may positively affect the peak bone mass reached in adulthood and the rate of premenopausal bone loss, both of which are important for future fracture risk. In contrast, long-acting progestagens used alone may have a detrimental effect on BMD. Depot medroxyprogesterone acetate (Depo-Provera), an injectable long-acting progestagen, inhibits gonadotropin secretion and results in a relative estrogen-deficiency state. Disruption of menstrual cycling is common, with an incidence of amenorrhea of 55 percent after 12 months of treatment and 68 percent after 24 months. In a cross-sectional study, 30 women using Depo-Provera for more than 5 years had decreased BMD at the lumbar spine and femoral neck compared with premenopausal women not using the preparation (Cundy et al., 1991). A 6-month prospective, randomized trial comparing Depo-Provera with a subdermally implanted levonorgestrel progestin (Norplant) was conducted in 22 premenopausal women, ages 20 to 45 years (Naessen et al., 1995). Forearm BMD increased 2.9 percent with Norplant and was stable with Depo-Provera treatment. Depo-Provera increased bone turnover, and Norplant increased bone formation, as evidenced by increased serum alkaline phosphatase, osteocalcin, and estradiol. The duration of the trial may not have been long enough to observe the adverse effects of Depo-Provera that have been seen in other studies.

Thus, the data suggest there are no detrimental effects of oral estrogen-containing contraceptives on bone health; in fact, they may afford protection from loss of bone in some women. Because of the small but significant increased risk of breast cancer with unopposed estrogen therapy, combined estrogen-progesterone replacement therapy should be used. The use of estrogen analogs such as droloxifene or raloxifene, which antagonize the oncogenic
effect of estrogen, offers promise for future research. Not enough is yet known about whether progestational agents, when used alone for contraception, will lead to bone loss.

OTHER LIFESTYLE FACTORS

Both smoking and alcohol use are moderately associated with an increased risk of fracture (Slemenda et al., 1989). Smoking is associated with lower body weight, earlier menopause, and lower postmenopausal estrogen levels, factors that each carry a risk of stress fractures (Heaney, 1996). A recent meta-analysis noted an increased risk of hip fracture with cigarette smoking in postmenopausal women, a risk that increased with increasing age (Law and Hackshaw, 1997). However, fracture risk did not increase with smoking in premenopausal women, and smokers were found to have BMD similar to that of nonsmokers.

In a study of women during Army BCT, Westphal et al. (1995) noted that alcohol consumption appeared to be an independent risk factor for injuries, as a significant dose response was found for time-loss injuries in women who reported successively more days of alcohol consumption compared with nondrinkers. It is not known what percentage of these injuries were due to stress fractures or whether they might have been related to alcohol consumption. Alcohol abuse impairs the absorption, utilization, storage, and excretion of nutrients, which, in combination with inadequate intake, results in nutritionally compromised conditions. Excess alcohol consumption is often associated with low dairy food intake and, in addition, is a cause of hypercalciuria (Heaney, 1996). High blood alcohol levels are also likely to be directly toxic to osteoblasts (Laitinen and Valimaki, 1991). Increased alcohol use has been shown to be marginally associated with an increased risk of fracture (Slemenda et al., 1992).
McDermott (1996) surveyed 1,000 premenopausal military women to assess dietary calcium intakes, physical activity, and habits affecting skeletal health. In a subset of 90 women who completed bone density testing by DXA, the attainment of peak bone mass was unaffected by history of smoking, alcohol use, and caffeine use during high school.

SUMMARY

Stress fracture represents the cumulative effect of repetitive mechanical loading at a rate greater than that of osteoblastic new bone formation. Disruption of this remodeling balance leads to symptomatic microfracture, which, if untreated, may progress to complete fracture. Measurements of bone mass or density are predictive of fracture risk in postmenopausal women, but data on the relationship between BMD and stress fracture are less plentiful, and current methods for the assessment of bone mass are not suitable for stress fracture screening. Preliminary data suggest that the mean BMD of military recruits who suffer a stress fracture is lower than that of recruits who do not (Beck, 1997). In addition, elite athletes with stress fractures have lower BMD than athletes without stress fractures (Bennell et al., 1996). However, both the fracture and nonfracture groups of military recruits and athletes had BMD measurements within the normal range. Moreover, except for rare instances in which stress fractures result from an underlying metabolic bone disorder, there currently is no evidence that
the risk of developing osteoporosis is increased in individuals who have previously suffered a stress fracture.

Fat mass and skeletal muscle mass have important influences on bone, affecting its growth, total mass, shape, geometry, and strength. While skeletal muscles insulate bone and absorb mechanical loading, excessive muscular development can generate large muscle forces and may cause stress fractures at selected anatomic locations. Regular participation in physical activity develops muscular strength, skill, and readiness. The rate of loading must closely match the rate of bone remodeling in order to prevent injuries and stress fractures during BCT. Muscle fatigue, lower extremity alignment, terrain, and equipment are other key factors that contribute to the occurrence of stress fractures (Matheson et al., 1987a). Exogenous estrogen-progestagen hormones given to women of reproductive age may also influence the risk of stress fractures. These synthetic hormones may positively affect the peak bone mass reached in adulthood and the rate of premenopausal bone loss, both of which are important determinants of future fracture risk. In contrast, long-acting progestagens may have a detrimental effect on BMD (Naessen et al., 1995).
