National Academies Press: OpenBook

Optimizing the Use of Aircraft Deicing and Anti-Icing Fluids (2011)

Chapter 3 - Holdover Time Variance Across an Airfield

Suggested Citation: "Chapter 3 - Holdover Time Variance Across an Airfield." National Academies of Sciences, Engineering, and Medicine. 2011. Optimizing the Use of Aircraft Deicing and Anti-Icing Fluids. Washington, DC: The National Academies Press. doi: 10.17226/14517.

Introduction

Holdover time determination systems (HOTDS), such as the D-Ice A/S Deicing Information System and NCAR Checktime, measure meteorological parameters at airport sites that are then used to calculate expected de/anti-icing fluid holdover time (HOT), thus facilitating better fluid selection. HOT is directly dependent on precipitation intensity, so it is vital that the intensity measured and used by the determination system reflect the highest intensity to which the aircraft may be exposed during its departure taxi. The key question, then, is whether a precipitation sensor at a single location at an airport can provide data with sufficient reliability for this application.

This part of the report presents the results, findings, and conclusions of an experiment to determine whether a single-location precipitation sensor can reliably report precipitation conditions for the entire airport. Precipitation intensity was measured at several locations at an airport simultaneously. Tests were conducted over two winter seasons, 2007–08 and 2008–09.

Preliminary Testing (Winter 2007–08)

In the winter of 2007–08, Montreal-Trudeau airport (YUL) was selected as the primary location for testing. Montreal-Mirabel airport (YMX) was selected as an alternative site; however, no data were collected at that airport during the first season.

Testing was performed during 11 natural precipitation events at YUL, and approximately 140 comparative data points were collected during this period. For each event, data collection teams were separated by distances ranging from 4,200 to 13,300 ft at the airport. Data collected by each team included precipitation rate and other relevant meteorological parameters affecting fluid HOT. The procedure consisted of collecting the precipitation rate data (as well as the other relevant data) over a 10-minute period at two sites at the airport simultaneously, using a stringent data collection protocol. The data collected were then compared to evaluate the variance in rates attributed to distance.

In general, the results indicated that the data on rate of precipitation were similar; therefore, the overall variance in the resulting HOT values provided by HOTDS positioned at these sites would be minimal. That said, the preliminary results indicated that the rate variance increased as the distance between the rate collection sites increased, and therefore that additional data should be gathered at sites separated by longer distances.

A full statistical analysis of the data collected as part of this task was completed in August 2008. The results indicated that a single HOTDS positioned at a central location at an airport with a small surface area would likely be sufficient to provide accurate information for the entire airport site. However, at airports with a large surface area, such as Denver International Airport, the distances from a central location at the airport to a departure runway may exceed 16,000 ft. No data for similar distances were collected in 2007–08, and it was therefore decided to conduct additional testing to verify whether airports with large surface areas would require additional HOTDS installations to provide reliable data.

Additional Testing (Winter 2008–09)

During the winter of 2008–09, the work effort was expanded to collect data at three additional airports.
This work satisfied two requirements: (1) the collection of data from sites with larger separation distances, approximately 25,000 to 29,000 ft, and (2) the collection of data to examine between-site differences in precipitation rates as a result of lake-effect snowfall.

The examination of variance at distances ranging from approximately 25,000 to 29,000 ft required data collection at Mirabel Airport (YMX) and at Denver International Airport (DEN). The investigation of between-site differences in precipitation rates was conducted at Syracuse Hancock International Airport (SYR), with a focus on lake-effect snow.

Lake-effect snow is produced in the winter when cold arctic winds move across long expanses of warmer lake water, providing energy and picking up water vapor, which freezes and is deposited on the lee shores. The areas surrounding lake-effect snow are called snow belts. Because lake-effect snow can cause significant variance in precipitation rates over small areas, it was of particular interest to examine the ability of a single HOTDS to provide sufficient coverage at an airport site impacted by lake-effect snow.

Research Approach and Methodologies

Test Procedures for Data Collection

The test procedure developed for use during HOTDS testing is based on the precipitation intensity measurement procedure included in Society of Automotive Engineers (SAE) Aerospace Recommended Practice (ARP) 5485. This is the same rate measurement procedure that has been employed in the development of de/anti-icing fluid holdover time tables since 1990. Separate test procedures were developed for each work element; these procedures, while relying on the same general methodology for data collection, were individualized to the particular airport with specific contact lists, airport diagrams, and a communication plan. The test procedures are included in Appendix A.

Focus Airports

Montreal-Trudeau International Airport YUL (Montreal, Quebec)

Montreal-Trudeau Airport (YUL) was selected as the primary location for the preliminary test program because efficiencies were obtained by conducting research at the airport where the APS test site is located. The test procedure for data collection was developed, tested, and refined at this primary location. The majority of all data collection was completed at YUL.

Mirabel International Airport YMX (Mirabel, Quebec)

Mirabel Airport (YMX) was selected to serve as a test area for long-distance data collection. YMX was envisioned to be the second largest airport in the world in terms of surface area, with a planned area of 39,660 hectares (396.6 km2). Economic factors eventually led to YMX being relegated to the role of a cargo airport, and it was therefore not expanded to this planned area. However, YMX still provided the necessary distance for an appropriate analysis.

Denver International Airport DEN (Denver, Colorado)

Denver International Airport (DEN) is one of the largest airports in the world from the perspective of surface area, with over 30,000 ft separating certain active runway departure points. For this reason, as well as for reasons related to the historical nature and severity of winter precipitation in Denver, DEN was selected as a desired test location for the continuation of the HOTDS long-distance data collection.

Syracuse Hancock International Airport SYR (Syracuse, New York)

Syracuse Hancock International Airport (SYR) was identified as an ideal airport at which to collect lake-effect snowfall data. Lake-effect snow on the Tug Hill Plateau (east of Lake Ontario) can frequently set daily records for snowfall in the United States. Syracuse, New York is directly south of the Tug Hill Plateau and receives significant lake-effect snow from Lake Ontario. Snowfall amounts at this location are significant and average 115.6 in. (294 cm) a year.

Test Locations and Remote Test Unit

Test Locations

All data collection took place on non-airside land surrounding each airport. Locations were kept non-airside to minimize disruptions to airport operations.
In addition, this allowed APS personnel full autonomy to come and go as required by each precipitation event. Typical locations were:

• Perimeter roads (city owned);
• Perimeter business parking lots;
• Long-term parking lots; and
• Fixed-base operator parking lots.

Remote Test Unit

A remote laboratory was established with all the necessary testing equipment installed in a 16-foot cube van (Figure 1). This allowed for testing in any desired remote location. Testing at the off-site locations was conducted in this mobile test unit, powered by generators.

Equipment and Methodology for Precipitation Measurement

A snow-catching methodology was employed in this research. The test procedure was developed based upon the rate measurement methodology employed for holdover time testing and described in SAE ARP 5485.

Because it was necessary to acquire data with limited error, a far more comprehensive and stringent methodology was applied to the procedure for this testing. The method establishes a rate of icing intensity by catching the precipitation with a pan of known dimensions over a specified period of time. This allows for a subsequent calculation of the rate, usually expressed in g/dm2/h. The following sections describe in detail the test equipment used in this snow-catching methodology.

Snow-Catch Pan

A snow-catch pan, placed at a 10° inclination on the test stand, was used to collect and weigh precipitation. The snow-catch pan was positioned on the test stand such that the longer axis of the pan was parallel with the longer axis of the test plate. A typical serving pan commonly found in the restaurant industry proved to be an adequate snow-catching pan. A matching lid allowed full control of precipitation collection. Four snow-catch pans were employed at each site. Figure 2 shows the pans that were used in testing, and Figure 3 shows the dimensions of each pan.

Test Stand

Specially designed test stands were fabricated to form-fit the snow-catch pans and ensure that the pans would sit at a 10° inclination. This 10° inclination is representative of the leading edge of an aircraft wing. In testing locations where ground surfaces were uneven, the test stands were manually leveled. There were no flanges or obstructions close to the edges of the plates that could interfere with the airflow over the collection pans. Figure 4 depicts the test stand.

The test stand was oriented facing into the predominant wind direction. A test stand is defined as facing into the wind when the long axes of the collection plates are facing into the wind direction. Wind direction was constantly monitored and adjustments were made, but not during any 10-minute collection period.

Precipitation Measurement Balance

A Sartorius EA series balance was employed for all testing. With a resolution of 0.2 g, this balance allowed for an accurate reading of precipitation accumulation. Figure 5 depicts the balance.

Methodology for Snow-Catch Collection

Four snow-catch pans were used, numbered from one to four. Each pan was coated with 450 ml of standard Type IV fluid. The wetted pans were weighed to the nearest 0.2 g. All four pans were placed under precipitation for a period of 10 minutes. The snow-catch pans were turned 180° at intervals of two minutes to ensure that no snow build-up would occur at either end of the pan. Past research has shown that pan rotation prevents loss of accumulation and hence gives the true precipitation accumulation. At the end of the 10-minute period, all four pans were re-weighed. The difference in weight before and after exposure to precipitation was used to compute the precipitation rate.

Other Equipment

Other support equipment used in the field is described in Appendix A.

Sequence of Events

The following sections describe the timing and communication protocols, as well as the sequence of testing protocols that were followed during the precipitation events at each airport.

Figure 1. Remote test unit.
Figure 2. Snow-catch pan.

Timing and Communication

Timepieces were synchronized before testing commenced. A detailed schedule of events was distributed before testing, and an agreed-upon start time was established. To achieve simultaneous collection of precipitation, a well-organized system of communication was incorporated into all testing. Standard Motorola VHF radios were employed and used frequently in testing; cell phones were sometimes used as well.

Sequence of Testing

All testing followed the same sequence, which allowed for the collection of three measurements per hour. This sequence was identical for both testing locations at each airport.

The typical sequence for the first collection period is detailed in Table 19. Ten minutes elapsed between the end of the first collection and the start of the second collection. The typical sequence for the second collection period is detailed in Table 20.

Figure 3. Dimensions of snow-catch pan.
Figure 4. Test stand.
Figure 5. Precipitation measurement balance.

Ten minutes elapsed between the end of the second collection and the start of the third collection. The typical sequence for the third collection period is detailed in Table 21. A similar sequence was used for subsequent measurements.

Personnel

For most of the initial testing in the winter of 2007–08, four APS personnel were required for testing. For the YUL tests, two personnel operated the mobile unit in the remote location and two remained at the APS test site. However, by the second winter of testing the test technicians were more experienced, and only two personnel were needed to operate the two mobile units used for testing at YMX, DEN, and SYR.

Data Forms

One general data form was used to record precipitation collection. This data form is shown in Figure 6.

Description of Data and Methodology Used to Process

The data collected for the holdover time variance across an airfield task, and its processing, are described in this section.

Tests Conducted

Tests were conducted over two winter seasons, 2007–08 and 2008–09, at nine different pairs of collection sites.

Tests Conducted in the Winter of 2007–08

Tests conducted during the winter of 2007–08 included nine snowstorm events from February 1, 2008 to March 13, 2008. These tests were all conducted at Montreal-Trudeau Airport, at various locations. A total of 126 tests were conducted. Of these, the data from 18 tests were excluded from analysis. The principal reasons for excluding data were cases where the measured precipitation rate exceeded 50 g/dm2/h and cases where there was a considerable amount of blowing snow at one of the test sites. After exclusion of these data, 108 tests remained, and this set of tests was subjected to analysis.

Table 19. Typical sequence for the first collection.

Time | Tester 1 | Tester 2
T = -5 minutes | Weigh and record initial weight | Weigh and record initial weight
T = -3 minutes | Verify wind direction and adjust stand | Verify wind direction and adjust stand
T = -2 minutes | Place two covered plates on stand | Place two covered plates on stand
T = 0 | Remove covers | Remove covers
T = 2 minutes | Rotate pans |
T = 4 minutes | Rotate pans |
T = 6 minutes | Rotate pans |
T = 8 minutes | Rotate pans |
T = 10 minutes | Cover pans and bring in for measurement | Cover pans and bring in for measurement

Table 20. Typical sequence for the second collection.

Time | Tester 1 | Tester 2
T = 18 minutes | Place pans back on stand; verify wind direction and adjust stand | Place pans back on stand; verify wind direction and adjust stand
T = 20 minutes | Remove covers | Remove covers
T = 22 minutes | Rotate pans |
T = 24 minutes | Rotate pans |
T = 26 minutes | Rotate pans |
T = 28 minutes | Rotate pans |
T = 30 minutes | Cover pans and bring in for measurement | Cover pans and bring in for measurement

Table 21. Typical sequence for the third collection.

Time | Tester 1 | Tester 2
T = 38 minutes | Place pans back on stand; verify wind direction and adjust stand | Place pans back on stand; verify wind direction and adjust stand
T = 40 minutes | Remove covers | Remove covers
T = 42 minutes | Rotate pans |
T = 44 minutes | Rotate pans |
T = 46 minutes | Rotate pans |
T = 48 minutes | Rotate pans |
T = 50 minutes | Cover pans and bring in for measurement | Cover pans and bring in for measurement

Figure 6. Implementation of holdover time determination systems data form.
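The three collection periods in Tables 19 through 21 follow one fixed 20-minute cycle: covers removed at T = 0, 20, and 40 minutes, pans rotated every 2 minutes, and pans covered and brought in 10 minutes after each start. The sketch below, with illustrative names only, generates that cycle for an arbitrary number of periods; it omits the pre-test weighing and wind checks listed in the tables.

```python
# Illustrative sketch only: generate the uncover/rotate/cover cycle shown in
# Tables 19 to 21 (times in minutes relative to the first uncovering).
def collection_schedule(periods: int):
    events = []
    for k in range(periods):
        start = 20 * k                       # each period starts 20 min apart
        events.append((start, "remove covers"))
        events.extend((start + m, "rotate pans") for m in (2, 4, 6, 8))
        events.append((start + 10, "cover pans and bring in for measurement"))
    return events


for t, action in collection_schedule(3):
    print(f"T = {t:2d} min: {action}")
```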

As described, each test set consisted of data collected simultaneously at two separate sites. At each site, precipitation was measured simultaneously on four rate pans. Thus, each test set comprised a total of eight data points.

Different testing sites were used during the course of the test season to produce data for various distances between test sites. Six different pairs of sites were used for testing, with separation distances as shown in Table 22. The test site locations and separation distances are shown in Figure 7. Figure 8 shows that the longest distance between departure points was 13,991 ft (4,265 m). Figure 9 shows distances from a central site to runway departure points.

Table 22. Test site locations for Winter 2007–08, Montreal-Trudeau Airport.

Site 1 Location | Site 2 Location | Separation (ft) | Separation (m) | Number of Events | Number of Tests
Chemin St François (45° 28' 33" N, 73° 45' 12" W) | APS Test Site (45° 28' 6" N, 73° 44' 28" W) | 4,167 | 1,270 | 1 | 18
Marshall Rd Snow Dump (45° 27' 28" N, 73° 44' 14" W) | APS Test Site (45° 28' 6" N, 73° 44' 28" W) | 4,232 | 1,290 | 3 | 40
Ch. Cote Vertu (45° 28' 40" N, 73° 43' 33" W) | APS Test Site (45° 28' 6" N, 73° 44' 28" W) | 5,052 | 1,540 | 1 | 9
Chemin St François (45° 28' 33" N, 73° 45' 12" W) | Ch. Cote Vertu (45° 28' 40" N, 73° 43' 33" W) | 7,017 | 2,139 | 2 | 15
Chemin St François (45° 28' 33" N, 73° 45' 12" W) | Marshall Rd Snow Dump (45° 27' 28" N, 73° 44' 14" W) | 7,933 | 2,418 | 1 | 16
APS Office Parking Lot (45° 28' 60" N, 73° 41' 36" W) | APS Test Site (45° 28' 6" N, 73° 44' 28" W) | 13,390 | 4,081 | 1 | 10

Figure 7. Site locations: Montreal Trudeau International Airport (YUL).
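The separation distances listed in Table 22 can be cross-checked from the site coordinates. The following sketch uses a standard haversine great-circle calculation with a mean Earth radius; the function name and decimal-degree values are illustrative, and results will differ somewhat from the listed distances because the coordinates are rounded to whole arc-seconds.

```python
# Approximate great-circle separation between two test sites (illustrative).
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_FT = 20_902_231   # mean Earth radius (6,371 km) expressed in feet


def separation_ft(lat1, lon1, lat2, lon2):
    """Distance in feet between two points given in decimal degrees."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_FT * asin(sqrt(a))


# APS test site vs. Marshall Rd snow dump (nominally 4,232 ft in Table 22).
print(round(separation_ft(45.4683, -73.7411, 45.4578, -73.7372)))
```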

Figure 8. Longest active distance at Montreal Trudeau International Airport (YUL).
Figure 9. Distances from APS test site to departure runways at Montreal Trudeau International Airport (YUL).

One test site of each pair was located close to the airport central deicing facility (CDF); thus, the noted distances are a good indication of typical distances from a centrally located HOTDS to runway departure points at YUL. The selected site pairs generated separation distances that provided a good reflection of the airport geography.

Tests Conducted in the Winter of 2008–09

Tests conducted during 2008–09 included six snowstorm events from December 9, 2008 to April 4, 2009. Tests were conducted at three different airports, as shown in Table 23. A total of 135 tests were conducted. Of these, the data from one test were removed due to a measurement error and excluded from the analysis.

Testing at SYR offered the opportunity to study lake-effect snowfall. Precipitation rates were recorded during one such event. Figure 10 shows the site locations at SYR. Figures 11 and 12 show the locations of the two test sites at YMX and DEN, respectively. The long separation distances between runway departure points are apparent from these images.

Table 23. Test site locations for Winter 2008–09.

Airport | Site 1 Location | Site 2 Location | Separation (ft) | Separation (m) | Number of Events | Number of Tests
Montreal-Mirabel International Airport (YMX) | Cargo C Parking Lot (45° 40' 46" N, 74° 02' 53" W) | Ch. Charles Parking Lot (45° 41' 27" N, 73° 56' 17" W) | 28,500 | 8,687 | 2 | 41
Syracuse Hancock International Airport (SYR) | Tuskegee Rd Parking Lot (43° 06' 19" N, 76° 06' 56" W) | South Bay Rd. Parking Lot (43° 07' 01" N, 76° 08' 33" W) | 8,300 | 2,530 | 1 (non-lake-effect) | 9
Syracuse Hancock International Airport (SYR) | Tuskegee Rd Parking Lot (43° 06' 19" N, 76° 06' 56" W) | South Bay Rd. Parking Lot (43° 07' 01" N, 76° 08' 33" W) | 8,300 | 2,530 | 1 (lake-effect) | 32
Denver International Airport (DEN) | E. 71st Ave Parking Lot (39° 54' 26" N, 104° 40' 13" W) | Trussville St. Parking Lot (39° 54' 01" N, 104° 40' 12" W) | 27,800 | 8,473 | 2 | 52

Figure 10. Site locations: Syracuse Hancock International Airport (SYR).

Figure 11. Site locations: Mirabel International Airport (YMX).
Figure 12. Site locations: Denver International Airport (DEN).

Summary of Test Events

Fifteen test events were completed over the course of the two winter seasons. Table 24 summarizes the events and the conditions present for each event. The table provides the test dates, the location of the tests, the separation distances between sites, and the number of tests conducted for each event, along with the weather conditions (average temperature, wind speed, and predominant precipitation condition).

Test Data Log

The log of test data collected over the two winters and subjected to analysis is included in Appendix B. Each row in this log contains data specific to one test set and records data collected at both test locations during an event.

The log of data is separated by event and sorted sequentially by calendar date as the tests were conducted. Table 25 excerpts the details of tests that were conducted for Event #1 on February 1, 2008 at YUL airport with a separation distance of 4,232 ft.

Following is a brief description of the column headings used in the test log:

• Set no.: Sequential number given to the test set. Sequencing was restarted at number 201 for the second season. Certain tests that were removed from the analysis are not included in this log;
• Time before: Time at start of test;
• Time after: Time at end of test;
• Pan delta: Measured weight (in grams) of precipitation collected during the test, for each of the four rate pans;
• Closest to mean: The two "closest to mean" pans chosen for further analysis;
• Average variance (g): Average variance, in grams;
• Average variance (%): Average variance, in percent;
• Temp: Outside air temperature during the test session (in °C);
• Wind dir: Direction of wind on rate pan (in tens of degrees);
• Wind speed (kph): Wind speed (in kilometers per hour);
• Visibility: Visibility (in km); and
• Weather: Description of snow intensity.

The test data logs for the remaining 14 events are included in Appendix B. These test logs contain all the data collected and subjected to analysis.

Scatter Diagram of Logged Data

Figure 13 depicts the precipitation data collected at short distances in both winters as a scatter diagram. The x and y coordinates for each point reflect the precipitation rates measured at Site 1 and Site 2. The best-fit line drawn through the points shows limited scatter. Figure 14 shows a similar scatter plot of the medium separation distance data; the data collected at SYR during the lake-effect event are shown with a different symbol. Figure 15 charts the precipitation data collected at the long separation distances (YMX and DEN).

The charts clearly show that the precipitation rate differences increase as a function of site separation distance.
Table 24. Summary of test events.

Event # | Date | Location | Separation Distance (ft) | # of Tests | Average Temp (°C) | Average Wind Speed (kph) | Predominant Weather
1 | Feb 1, 2008 | YUL | 4,232 | 16 | -5.3 | 32 | Snow, Ice Pellets
2 | Feb 5, 2008 | YUL | 4,232 | 13 | -4.4 | 21 | Snow
3 | Feb 9, 2008 | YUL | 5,052 | 9 | -3 | 12 | Snow
4 | Feb 13, 2008 | YUL | 4,167 | 18 | -9.3 | 19 | Snow
5 | Feb 15, 2008 | YUL | 4,232 | 11 | -5.3 | 15 | Snow
6 | Feb 26, 2008 | YUL | 7,017 | 10 | -4.2 | 33 | Snow
7 | Mar 5, 2008 | YUL | 7,017 | 5 | -8.1 | 43 | Snow
8 | Mar 8, 2008 | YUL | 7,933 | 16 | -2.1 | 29 | Snow, Fog
9 | Mar 13, 2008 | YUL | 13,390 | 10 | -7.9 | 20 | Snow
10 | Dec 9, 2008 | YMX | 28,500 | 24 | -11.5 | 11 | Light Snow
11 | Dec 12, 2008 | YMX | 28,500 | 17 | -7.1 | 9 | Light Snow
12 | Jan 8, 2009 | SYR | 8,300 | 9 | -3.2 | 21 | Light Snow
13 | Jan 8, 2009 | SYR | 8,300 | 32 | -5.3 | 25 | Light Snow
14 | Mar 26, 2009 | DEN | 27,800 | 35 | -9.9 | 32 | Light Snow
15 | Apr 4, 2009 | DEN | 27,800 | 17 | -0.1 | 22 | Light Snow
Total tests: 242

Table 25. Example of detailed test log, Event #1.

Event #1, February 1, 2008, Montreal (YUL), 4,232 ft separation
Site A: Marshall Road Snow Dump Facility (45° 27' 28" N, 73° 44' 14" W)
Site B: APS Test Facility (45° 28' 6" N, 73° 44' 28" W)

Set No. | Time Before | Time After | Site 1: Pan Deltas 1-4 (g) | Avg | Best 1, Best 2 (closest to mean) | Avg Best | Site 2: Pan Deltas 1-4 (g) | Avg | Best 1, Best 2 (closest to mean) | Avg Best | Difference Between Avg. Best (g) | Variance in Avg. Best (%) | MSC Data: Temp (°C) | Wind Direction (10's deg) | Wind Speed (kph) | Visibility (km) | Weather
1 | 13:00 | 13:10 | 61.4, 60.6, 59.2, 59 | 60.1 | 60.6, 59.2 | 59.9 | 62.2, 61, 53.4, 52.2 | 57.2 | 61, 53.4 | 57.2 | 2.7 | 4.60% | -7.1 | 5 | 33 | 0.8 | Moderate Snow
2 | 13:20 | 13:30 | 86.8, 81.6, 79.6, 78 | 81.5 | 81.6, 79.6 | 80.6 | 79, 77, 71.6, 71.6 | 74.8 | 77, 71.6 | 74.3 | 6.3 | 8.10% | -6.7 | 5 | 33 | 0.8 | Moderate Snow
3 | 13:50 | 14:00 | 87.6, 91, 86.8, 87.4 | 88.2 | 87.6, 87.4 | 87.5 | 87, 87.2, 86.8, 89.2 | 87.6 | 87.2, 87 | 87.1 | 0.4 | 0.50% | -6 | 5 | 26 | 0.8 | Moderate Snow
4 | 14:10 | 14:20 | 129, 125, 121, 119 | 124 | 125, 121 | 123 | 135, 133, 135, 134 | 134 | 135, 134 | 134 | 11.3 | 8.80% | -5.7 | 5 | 26 | 0.8 | Moderate Snow
5 | 14:40 | 14:50 | 103, 103, 106, 103 | 104 | 103, 103 | 103 | 114, 113, 114, 116 | 114 | 114, 114 | 114 | 10.8 | 9.90% | -5.4 | 7 | 30 | 3.2 | Snow, Ice Pellets
6 | 15:00 | 15:10 | 59.6, 60.8, 58.2, 62 | 60.2 | 59.6, 60.8 | 60.2 | 68, 66.8, 68.6, 69.4 | 68.2 | 68, 68.6 | 68.3 | 8.1 | 12.60% | -4.9 | 7 | 30 | 3.2 | Snow, Ice Pellets
7 | 15:20 | 15:30 | 61.6, 60, 59.2, 60 | 60.2 | 60, 60 | 60 | 64.6, 62.8, 66.2, 66.8 | 65.1 | 64.6, 66.2 | 65.4 | 5.4 | 8.60% | -4.9 | 7 | 30 | 3.2 | Snow, Ice Pellets
8 | 15:40 | 15:50 | 52.6, 56.8, 53.8, 53.6 | 54.2 | 53.8, 53.6 | 53.7 | 44, 43.8, 44, 44.8 | 44.2 | 44, 44 | 44 | 9.7 | 19.90% | -5 | 5 | 33 | 1.6 | Snow
9 | 16:00 | 16:10 | 93.6, 90.4, 88.8, 90.2 | 90.8 | 90.4, 90.2 | 90.3 | 77.6, 78.6, 80.8, 79.2 | 79.1 | 79.2, 78.6 | 78.9 | 11.4 | 13.50% | -5 | 5 | 33 | 1.6 | Snow
10 | 17:40 | 17:50 | 54.6, 55.6, 53.6, 55.8 | 54.9 | 54.6, 55.6 | 55.1 | 50.6, 50.8, 51.6, 52.2 | 51.3 | 51.6, 50.8 | 51.2 | 3.9 | 7.30% | -5.2 | 6 | 32 | 1.2 | Snow
11 | 18:00 | 18:10 | 99.6, 98.8, 99.6, 98.6 | 99.2 | 98.8, 99.6 | 99.2 | 102, 102, 101, 103 | 102 | 102, 102 | 102 | 2.8 | 2.80% | -5 | 6 | 32 | 1.2 | Snow
12 | 18:20 | 18:30 | 98.8, 100, 102, 99.6 | 100 | 100, 99.6 | 99.9 | 107, 110, 109, 110 | 109 | 110, 109 | 109 | 9.2 | 8.80% | -5 | 6 | 32 | 1.2 | Snow
13 | 18:40 | 18:50 | 93, 93.8, 92.8, 93.2 | 93.2 | 93.2, 93 | 93.1 | 105, 107, 107, 108 | 107 | 107, 107 | 107 | 14.1 | 14.10% | -4.9 | 6 | 30 | 2 | Snow, Ice Pellets
14 | 19:00 | 19:10 | 104, 103, 104, 103 | 104 | 104, 103 | 103 | 119, 120, 118, 121 | 119 | 120, 119 | 119 | 16 | 14.40% | -4.9 | 6 | 30 | 2 | Snow, Ice Pellets
15 | 19:30 | 19:40 | 118, 117, 116, 117 | 117 | 117, 117 | 117 | 119, 120, 120, 124 | 121 | 120, 120 | 120 | 2.9 | 2.40% | -4.9 | 6 | 37 | 2.4 | Snow, Ice Pellets
16 | 19:50 | 20:00 | 113, 111, 110, 111 | 111 | 111, 111 | 111 | 121, 120, 123, 123 | 122 | 121, 123 | 122 | 10.4 | 8.90% | -4.9 | 6 | 37 | 2.4 | Snow, Ice Pellets
17 | 20:10 | 20:20 | 102, 104, 101, 104 | 103 | 102, 104 | 103 | 110, 111, 113, 113 | 112 | 111, 113 | 112 | 8.8 | 8.20% | -4.8 | 6 | 37 | 2.4 | Snow, Ice Pellets
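The derived columns in the log (the two pans closest to the site mean, their average, and the between-site difference) can be reproduced from the four pan deltas. The sketch below is illustrative; in particular, expressing the percent difference relative to the mean of the two sites' values is inferred from the example rows rather than stated in the text.

```python
# Sketch of the per-row reduction used in the test log: pick the two pan
# deltas closest to the four-pan mean, average them, then compare sites.
from statistics import mean


def best_two_average(pan_deltas_g):
    """Average of the two pan deltas closest to the four-pan mean."""
    m = mean(pan_deltas_g)
    best = sorted(pan_deltas_g, key=lambda x: abs(x - m))[:2]
    return mean(best)


def site_difference(site1_pans, site2_pans):
    """Absolute difference (g) and percent difference between the two sites."""
    a, b = best_two_average(site1_pans), best_two_average(site2_pans)
    diff = abs(a - b)
    return diff, 100.0 * diff / mean([a, b])


# First row of Table 25: expect roughly 2.7 g and 4.6%.
print(site_difference([61.4, 60.6, 59.2, 59.0], [62.2, 61.0, 53.4, 52.2]))
```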

Figure 13. Precipitation rate comparison data for short separation distances (4,167 to 5,052 ft; 67 points; R2 = 0.98).
Figure 14. Precipitation rate comparison data for medium separation distances (7,017 to 13,390 ft; 82 points; R2 = 0.93), with the lake-effect data (8,300 ft; R2 = 0.31) separated.
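The best-fit lines and R2 values shown in the scatter plots (Figures 13 through 15) can be reproduced from the paired site rates. The sketch below assumes an ordinary least-squares fit, which is one reasonable reading of how the published fits were produced.

```python
# Least-squares fit of the Site 2 rate against the Site 1 rate, with R^2,
# for one of the paired-rate data sets plotted in Figures 13 to 15.
import numpy as np


def fit_with_r2(rates_site1, rates_site2):
    x = np.asarray(rates_site1, dtype=float)
    y = np.asarray(rates_site2, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)          # best-fit line
    predicted = slope * x + intercept
    ss_res = np.sum((y - predicted) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot  # R^2
```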

An R2 value was determined for each of the data sets shown in Figures 13 through 15:

• Short separation distance: R2 = 0.98;
• Medium separation distance: R2 = 0.93;
• Long separation distance: R2 = 0.91; and
• Lake-effect data: R2 = 0.31.

Figure 15. Precipitation rate comparison data for long separation distances (27,800 to 28,500 ft; 93 points; R2 = 0.91).

The R2 parameter provides a sense of the variance in the data and shows that the variance between the two sites increases as the distance between the sites is increased. The R2 value for the lake-effect snowfall data clearly shows that this type of precipitation event also increases the variance in precipitation rate.

While the variation of the points around the best-fit lines reflects random effects, it may also indicate real differences in precipitation between sites. The task of the analysis is to identify which of the data sets result from random effects and which reflect real differences, and, for those differences that are real, to evaluate their operational significance. The next sections describe the approach taken to answer these questions.

Data Analysis

The analysis was applied to the consolidated data collected over the two test seasons and was aimed at determining the effect that any real difference in precipitation between the two sites would have on fluid holdover times. The initial treatment of the data thus required calculation of precipitation rates, followed by calculation of fluid holdover times for a variety of fluids. Subsequently, the calculated fluid holdover times for each precipitation data point were examined statistically to determine which test sets had differences that could be ascribed to random effects and which had real differences in the holdover times generated at each of the two sites. The analysis is described in detail in the following sections.

Calculation of Precipitation Rates

The precipitation rate calculation is based on the measured weight of precipitation collected over a measured time span on a surface of known dimensions. The rate pans used to collect precipitation had a surface area of 14.53 dm2 (1.56 ft2). The duration of each test was determined from the recorded start and end times. For a 10-minute test interval, the precipitation rate is calculated as:

Rate [g/dm2/h] = Weight of collected precipitation [g] / (14.53 [dm2] x (10/60) [h])
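A minimal sketch of the rate conversion above, assuming the 14.53 dm2 pan area and a 10-minute collection interval described in the text; the function name is illustrative.

```python
# Sketch of the pan-weight to precipitation-rate conversion described above.
PAN_AREA_DM2 = 14.53      # collection surface of one rate pan (dm^2)


def precipitation_rate(weight_before_g: float, weight_after_g: float,
                       duration_min: float = 10.0) -> float:
    """Return precipitation intensity in g/dm^2/h from one pan measurement."""
    collected_g = weight_after_g - weight_before_g
    duration_h = duration_min / 60.0
    return collected_g / (PAN_AREA_DM2 * duration_h)


# Example: 14.5 g caught in 10 minutes is roughly 6 g/dm^2/h (light snow).
print(round(precipitation_rate(512.0, 526.5), 1))
```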

Any test set where the average rate calculated for a site exceeded 50 g/dm2/h was excluded from the analysis. The rationale for this exclusion is as follows. The currently published holdover time guidelines for snow have an upper limit on precipitation rates:

• Very light snow: up to 4 g/dm2/h;
• Light snow: between 4 and 10 g/dm2/h; and
• Moderate snow: between 10 and 25 g/dm2/h.

During the process of collecting fluid endurance time data to generate HOT guidelines, a good deal of data has been collected during heavy snow events, that is, beyond 25 g/dm2/h. In the past, these data have served to enhance the accuracy of the regression equations used to develop HOT guidelines for snow at rates up to moderate. However, there has been interest in extending the guidelines to reflect rates greater than moderate. Discussions at the 2006 SAE G-12 HOT subcommittee meeting indicated that an upper limit should not go beyond 50 g/dm2/h considering that:

• The frequency of heavy snow (>25 g/dm2/h) is about 3%;
• Most of this occurs in the range of 25 to 50 g/dm2/h; and
• Most of the endurance time data in heavy snow was collected in the range of 25 to 50 g/dm2/h.

This analysis has taken the perspective of evaluating the risk under conditions in which holdover times would actually be generated, and thus only those test sets are examined.

Calculation of Fluid Holdover Times

Holdover time guidelines, which are published annually, provide pilots with tables of the protection times provided by de/anti-icing fluids in winter conditions. The values in the holdover time tables are developed through regression analysis of recorded fluid endurance time data.

Aircraft de/anti-icing fluid holdover time is a function of fluid dilution, precipitation rate, precipitation type, and ambient temperature. All the tests reported here were conducted in snow conditions. The following regression equation is used to calculate holdover times in snow:

t = 10^I x R^A x (2 - T)^B

where:
t = time (minutes);
R = rate of precipitation (g/dm2/h);
T = temperature (°C); and
I, A, B = coefficients determined from the regression.

This equation substitutes 2 - T for the variable T in order to prevent taking the log of a negative number, as natural snow can occur at temperatures approaching +2°C.

HOTDS produce holdover times by applying the same regression equations and coefficients used to calculate the values in the current holdover time guidelines. To assess the effect that separate HOTDS sites might have on holdover times, each measured precipitation weight data point was converted to a holdover time, using the regression equations and coefficients for a selection of fluid brands that are currently in operational use. Those fluid brands and strengths are given in Table 26, along with the regression coefficients used to calculate the holdover times provided in the winter 2007–08 guidelines. Although different regression coefficients may apply when ambient temperatures are lower than -14°C, such temperatures did not occur during data collection.

Table 26. Fluid holdover time regression coefficients.

Coefficient | SAE Type I | Clariant Safewing MP IV 2012 Protect 100/0 | Octagon MaxFlo 100/0 | Kilfrost ABC-S 75/25 | Kilfrost ABC-S 50/50
I | 2.0072 | 2.9261 | 3.0846 | 2.5569 | 2.3232
A | -0.5752 | -0.6725 | -0.8545 | -0.7273 | -0.8869
B | -0.5585 | -0.5399 | -0.3781 | -0.1092 | -0.2936

In accordance with the current practice for HOT table development for snow, holdover times were capped at 120 minutes. Holdover times for a fluid strength of 50/50 concentration are constrained to OAT of -3°C and above. This constraint resulted in setting aside some data sets when evaluating the effect on this fluid's holdover times.
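The regression can be applied directly with the coefficients in Table 26. The sketch below assumes the 120-minute cap described in the text and uses only the snow coefficients listed above; dilution-specific temperature limits (such as the -3°C restriction on the 50/50 strength) are not enforced here.

```python
# Holdover time t = 10**I * R**A * (2 - T)**B, capped at 120 minutes.
# Coefficient values copied from Table 26 (winter 2007-08 guidelines).
COEFFS = {
    "SAE Type I":                 (2.0072, -0.5752, -0.5585),
    "Clariant MP IV 2012 100/0":  (2.9261, -0.6725, -0.5399),
    "Octagon MaxFlo 100/0":       (3.0846, -0.8545, -0.3781),
    "Kilfrost ABC-S 75/25":       (2.5569, -0.7273, -0.1092),
    "Kilfrost ABC-S 50/50":       (2.3232, -0.8869, -0.2936),
}


def holdover_time(fluid: str, rate_g_dm2_h: float, oat_c: float,
                  cap_min: float = 120.0) -> float:
    """Holdover time in minutes for a given fluid, snow rate, and OAT."""
    i, a, b = COEFFS[fluid]
    t = (10.0 ** i) * (rate_g_dm2_h ** a) * ((2.0 - oat_c) ** b)
    return min(t, cap_min)


# Example: Octagon MaxFlo 100/0 at 10 g/dm^2/h and -5 deg C.
print(round(holdover_time("Octagon MaxFlo 100/0", 10.0, -5.0), 1))
```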
Statistical Analysis to Compute HOT Difference Between Sites

Each test set consisted of data collected simultaneously at two separate sites. At each site, precipitation was measured simultaneously on four rate pans. Thus, each test set usually comprised a total of eight data points.

The foregoing analysis then produced eight values of fluid holdover time for each test set. In several cases, only three data points were recorded at one of the sites, and the statistical analysis took this into account.

The objective then was to examine the difference in HOTs for the two sites. Because a maximum of four HOT values existed for each of the two sites for each test, the comparison was conducted using small-sample theory. This complies with the general rule that statistical analysis of samples with size less than 30 must be corrected for sample size. Student's t-distribution, which corrects for sample size, was applied to learn whether there was a statistically significant difference between the holdover times generated at the two site locations for each test set.

In tests such as these, where a small number of data points exist for each of two conditions and the objective is to determine whether there is a difference between the two, the common analytical approach is to apply a null hypothesis, which assumes that there is no difference between the two sets. This assumption enables the two databases for each test set to be combined, which produces a better estimate of the population standard deviation. The analysis then examines the data to statistically test the null hypothesis.

Before the t-test can be used in this way, the two sets of four HOTs must first be examined to see if their variances are sufficiently alike to justify the assumption that they each could be estimates of the same population variance. This examination of statistical variance uses the F-distribution. If there is a significant difference in statistical variance, then those test sets cannot be combined. For those test sets, a different statistical approach using a separate-variances t-test is applied.

The F-test was applied to the HOTs for each test set in the following manner:

• Calculate the F-value, which is the ratio of the statistical variances of the two sets of HOTs, with the highest in the numerator;
• Retrieve the appropriate F-value from an F-distribution table calculated for a 0.05 significance level. The tables are formatted by number of degrees of freedom for the numerator and denominator, with the highest number in the numerator. The number of degrees of freedom is the site test sample size minus 1 (nx - 1);
• Compare the calculated F-value to the table F-value;
• If the calculated F-value is less than the table F-value, then one can assume that the variances of the two sets are not significantly different, and the data from the two sites can be combined for the t-test using the null hypothesis approach; and
• If the calculated F-value is greater than the table F-value, then the variances of the two sets are significantly different, and the data from the two sites cannot be combined for the t-test. These data sets are then analyzed using a separate-variances t-test.

For ease of description, the following refers to tests comprised of four HOTs for each site. The actual analysis examined the real number of samples recorded.

The t-test for those test sets that passed the F-test was applied as follows:

• Calculate the standard deviation (SD) for each site; and
• Calculate a combined variance for the two sites and its square root for a combined SD.
The combination is weighted by degrees of freedom:

Combined variance = [(n1 - 1) x SD1^2 + (n2 - 1) x SD2^2] / (n1 + n2 - 2)

where:
SD1, SD2 = standard deviations of the HOTs at Site 1 and Site 2;
SD1^2, SD2^2 = the corresponding variances; and
n1, n2 = number of tests in the test set from Site 1 and Site 2.

• Calculate t:

t = (mean holdover time 1 - mean holdover time 2) / [combined SD x sqrt(1/n1 + 1/n2)]

• Compare the calculated t-value to the t-table value at a significance level of 0.025, for 6 degrees of freedom (n1 + n2 - 2). This is a two-tailed test; thus the resulting level of significance is 0.05; and
• If the calculated t is less than the t-table value, then accept the hypothesis that there is no difference between the two tests.

The t-test for those test sets that failed the F-test was applied as follows:

• Calculate t:

t = (mean holdover time 1 - mean holdover time 2) / sqrt(SD1^2/n1 + SD2^2/n2)

• Calculate degrees of freedom:

D.F. = (SD1^2/n1 + SD2^2/n2)^2 / [(SD1^2/n1)^2/(n1 - 1) + (SD2^2/n2)^2/(n2 - 1)]

• Compare the calculated t-value to the t-table value at a significance level of 0.025, for the calculated number of degrees of freedom; and
• If the calculated t is less than the t-table value, then accept the hypothesis that there is no difference between the two tests.
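A compact sketch of the per-test-set comparison described above. It uses scipy only to look up F and t critical values at the significance levels given in the text; the separate-variances degrees-of-freedom expression is the standard Satterthwaite approximation, which is assumed here to match the procedure used in the study.

```python
# Sketch of the F-test / t-test screening applied to each test set.
from math import sqrt
from statistics import mean, stdev
from scipy import stats


def sites_differ(hots_site1, hots_site2, alpha=0.05):
    """True if the holdover times from the two sites differ significantly."""
    n1, n2 = len(hots_site1), len(hots_site2)
    v1, v2 = stdev(hots_site1) ** 2, stdev(hots_site2) ** 2

    # F-test with the larger variance in the numerator (0.05 level).
    if v1 >= v2:
        f, f_crit = v1 / v2, stats.f.ppf(1 - alpha, n1 - 1, n2 - 1)
    else:
        f, f_crit = v2 / v1, stats.f.ppf(1 - alpha, n2 - 1, n1 - 1)

    diff = mean(hots_site1) - mean(hots_site2)
    if f < f_crit:
        # Variances comparable: pooled (null-hypothesis) t-test.
        pooled_var = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
        t = diff / (sqrt(pooled_var) * sqrt(1 / n1 + 1 / n2))
        df = n1 + n2 - 2
    else:
        # Variances differ: separate-variances (Welch) t-test.
        t = diff / sqrt(v1 / n1 + v2 / n2)
        df = (v1 / n1 + v2 / n2) ** 2 / (
            (v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))

    t_crit = stats.t.ppf(1 - alpha / 2, df)   # two-tailed test, 0.05 overall
    return abs(t) > t_crit
```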

The test sets were then sorted to separate those that the t-test indicated were from the same population from those that were from different populations. For tests that were determined to be from the same population, we can consider that any difference between the two sites is a random event due to sampling and can assume there is no real difference between sites.

For tests that were from different populations, the difference in mean holdover times was then calculated as a percentage of the lower of the two values. These values, as well as the absolute differences in holdover times, were then examined for significance from an operational perspective and for any dependency on the distance between sites.

Secondary Analysis Methodology to Account for CARs Exemption

An exemption from the Canadian Aviation Regulations (CARs) (1) pertaining to ground deicing operations has been granted to a Canadian carrier for the purpose of permitting operational use of HOTs generated by a HOTDS. This exemption is subject to a number of conditions, some of which affect the calculation of holdover times. Those conditions are:

a) Holdover times shall be calculated on the basis of measured precipitation rates, increased by certain tolerances:
   • From 0 to 10 g/dm2/h: +3.0 g/dm2/h;
   • Above 10 to 25 g/dm2/h: +6.0 g/dm2/h; and
   • Above 25 g/dm2/h: +14.0 g/dm2/h.
b) The precipitation rate input for the purpose of computing fluid holdover times shall not be less than 2.0 g/dm2/h.
c) Holdover time determinations shall be inhibited in snow conditions exceeding 50 g/dm2/h.
d) Holdover time determinations in snow for Type II and IV de/anti-icing fluids shall be capped at 120 minutes.

A secondary analysis was conducted wherein the measured data were adjusted according to these conditions. The statistical analysis then proceeded as described for the base case. This process caused a further number of test sets to be excluded from analysis when their actual precipitation rate was less than 2.0 g/dm2/h or their augmented precipitation rate exceeded 50 g/dm2/h.
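A small sketch of how measured rates were adjusted for the CARs-exemption case (conditions a through c); returning None marks a test set that the secondary analysis excluded. Whether a rate of exactly 10 or 25 g/dm2/h falls in the lower or upper tolerance band is an assumption here.

```python
# Rate adjustment for the CARs-exemption secondary analysis (illustrative).
def cars_adjusted_rate(measured_g_dm2_h: float):
    """Return the augmented rate, or None if the test set would be excluded."""
    if measured_g_dm2_h < 2.0:               # below the minimum rate input
        return None
    if measured_g_dm2_h <= 10.0:
        augmented = measured_g_dm2_h + 3.0
    elif measured_g_dm2_h <= 25.0:
        augmented = measured_g_dm2_h + 6.0
    else:
        augmented = measured_g_dm2_h + 14.0
    return None if augmented > 50.0 else augmented
```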
Findings and Applications

The findings and applications of the work completed for the HOT variance across an airfield task are presented in this section, as follows:

• Between-Site Differences in HOT. This section provides a summary of the between-site differences in HOTs, along with the operational significance of these differences. A secondary analysis was also completed that examines the between-site differences using the conditions that are stipulated in the CARs exemption;
• Examination of Site Separation Distance. This section examines the relationship between site separation distance and the extent of HOT differences due to these distances;
• Examination of Lake-Effect Snowfall on HOT Differences. This section examines the impact of lake-effect snow;
• Comparison of HOTDS Results to Current Operational Practices. This section provides a comparison of the HOTDS results to example cases of information that pilots might derive from the use of current operational practices, which use METAR reports or the FAA/TC visibility tables to estimate HOTs; and
• HOTDS Implementation Strategy and Timeline. This section describes possible implementation strategies.

Between-Site Differences in HOT

Based on the previously described data analysis methodology, potential differences in HOTs were determined for specific fluids from the consolidated data collected over the two test seasons. This analysis was conducted to determine the effect that any real difference in precipitation rate between the two sites would have on fluid HOTs. The initial treatment of the data thus required calculation of precipitation rates, followed by calculation of fluid HOTs for a variety of fluids. The calculated HOTs for each precipitation data point were then examined statistically to determine which test sets had differences that could not be attributed to random effects, and the extent of the difference in the HOTs generated for each of the two sites.

The analysis produced a table of results for each fluid; as an example, the table for Octagon MaxFlo 100/0 is presented in Table 27. This fluid provides a good example of the extent of difference in holdover times based on data collected at two separate sites. Similar tables for all fluids examined are included in Appendix C.

Columns 1 through 4 show the test set number, the distance between sites, the outside air temperature (OAT) at which the test was conducted, and the average precipitation rate in snow based on all rate measurements from both sites. Columns 5 and 6 show the mean fluid HOTs for each site, calculated for Octagon MaxFlo at 100/0 strength; Column 7 shows the calculated difference between Columns 5 and 6; Column 8 is the percentage difference between Columns 5 and 6, based on the lower of the two values; and Columns 9 and 10 show the number and percentage of test sets grouped by various ranges of between-site difference.

Table 27. Holdover time differences for Octagon MaxFlo 100/0 at two sites.

Number of test sets in each difference range (of 242 analyzed):
Test sets concluded as coming from same population: 95 (39%)
Test sets forced to equality by 120-minute rule: 37 (15%)
Test sets where difference is <20%: 69 (29%)
Test sets where difference is 20 to <30%: 25 (10%)
Test sets where difference is 30 to <50%: 9 (4%)
Test sets where difference is >50%: 7 (3%)
Total tests analyzed: 242

Comparison of endurance times (min) for test sets with differences of 20% or more.
Set No. | Distance Between Sites (ft) | Temp (°C) | Avg. Rate Both Sites (g/dm2/h) | Site 1 | Site 2 | Difference | Difference as % of Lowest Site

Difference range from 20 to <30% (25 test sets, 10%):
221 | 28,500 | -9 | 17.2 | 47.7 | 39.6 | 8.1 | 20.5%
237 | 28,500 | -6 | 13.2 | 55.9 | 67.5 | 11.6 | 20.7%
95 | 7,933 | 0.4 | 16.3 | 103.7 | 85.6 | 18.0 | 21.1%
312 | 27,800 | -10 | 6.0 | 113.8 | 93.9 | 19.8 | 21.1%
248 | 8,300 | -3.3 | 15.9 | 67.9 | 55.5 | 12.4 | 22.3%
65 | 4,232 | -3.2 | 11.0 | 93.1 | 76.1 | 17.0 | 22.4%
254 | 8,300 | -4 | 5.0 | 98.0 | 120.0 | 22.0 | 22.4%
233 | 28,500 | -6 | 10.0 | 70.0 | 86.6 | 16.6 | 23.7%
276 | 8,300 | -5.6 | 7.1 | 96.0 | 118.8 | 22.8 | 23.7%
304 | 27,800 | -10 | 12.3 | 50.5 | 62.6 | 12.2 | 24.1%
235 | 28,500 | -6 | 10.9 | 65.1 | 80.9 | 15.8 | 24.3%
258 | 8,300 | -4 | 9.3 | 103.2 | 82.6 | 20.6 | 25.0%
231 | 28,500 | -8 | 5.9 | 120.0 | 95.6 | 24.4 | 25.6%
275 | 8,300 | -6 | 8.0 | 83.7 | 105.6 | 21.8 | 26.1%
306 | 27,800 | -10 | 10.4 | 72.9 | 57.8 | 15.2 | 26.2%
293 | 27,800 | -10 | 13.7 | 57.7 | 45.3 | 12.3 | 27.2%
218 | 28,500 | -10 | 23.7 | 36.2 | 28.4 | 7.8 | 27.4%
203 | 28,500 | -14 | 7.9 | 83.4 | 65.4 | 18.0 | 27.5%
242 | 8,300 | -3.3 | 14.4 | 75.8 | 59.0 | 16.8 | 28.5%
299 | 27,800 | -10 | 14.1 | 44.3 | 56.9 | 12.7 | 28.6%
251 | 8,300 | -3.3 | 8.4 | 93.1 | 120.0 | 26.9 | 28.9%
294 | 27,800 | -10 | 10.6 | 72.3 | 56.0 | 16.3 | 29.0%
210 | 28,500 | -12 | 13.2 | 56.6 | 43.8 | 12.8 | 29.2%
241 | 28,500 | -6 | 7.5 | 114.3 | 88.3 | 25.9 | 29.4%
228 | 28,500 | -8 | 11.6 | 72.4 | 55.8 | 16.5 | 29.6%

Difference range from 30 to <50% (9 test sets, 4%):
300 | 27,800 | -10 | 12.3 | 64.6 | 49.7 | 14.9 | 30.1%
201 | 28,500 | -14 | 7.2 | 68.4 | 89.1 | 20.7 | 30.3%
230 | 28,500 | -8 | 7.0 | 85.2 | 111.6 | 26.4 | 30.9%
290 | 27,800 | -10 | 9.2 | 82.7 | 63.0 | 19.7 | 31.2%
107 | 7,933 | -3.9 | 24.4 | 35.8 | 47.0 | 11.3 | 31.5%
30 | 4,232 | -4.8 | 8.8 | 107.7 | 80.1 | 27.5 | 34.4%
317 | 27,800 | -10 | 8.1 | 68.6 | 94.4 | 25.7 | 37.5%
246 | 8,300 | -3.3 | 22.9 | 53.0 | 38.5 | 14.5 | 37.7%
229 | 28,500 | -8 | 9.6 | 62.8 | 90.2 | 27.4 | 43.6%

Difference range >50% (7 test sets, 3%):
308 | 27,800 | -10 | 9.0 | 60.7 | 91.7 | 31.0 | 51.0%
240 | 28,500 | -6 | 7.8 | 120.0 | 77.0 | 43.0 | 55.8%
259 | 8,300 | -5 | 6.3 | 76.8 | 120.0 | 43.2 | 56.2%
226 | 28,500 | -9 | 7.4 | 72.2 | 117.3 | 45.0 | 62.3%
249 | 8,300 | -3 | 12.3 | 62.1 | 104.7 | 42.6 | 68.7%
253 | 8,300 | -4 | 9.4 | 69.4 | 120.0 | 50.6 | 72.9%
252 | 8,300 | -4 | 21.0 | 100.4 | 30.6 | 69.8 | 227.8%

For this fluid, this grouping shows:

• 39% of the 242 test sets had no significant difference in HOTs between sites;
• 15% of all test sets were forced to equality by the 120-minute rule;
• 29% had between-site holdover time differences of less than 20%;
• 10% had between-site holdover time differences from 20 to 30%;
• 4% had between-site holdover time differences from 30 to 50%; and
• 3% had between-site holdover time differences greater than 50%.
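The grouping used in Tables 27 and 29 can be reproduced by binning each analyzed test set according to its difference expressed as a percentage of the lower-site HOT. In this sketch, None stands for a set with no statistically significant difference; sets forced to equality by the 120-minute cap are not broken out separately, which is a simplification relative to the published tables.

```python
# Bin between-site HOT differences into the ranges used in Tables 27 and 29.
from collections import Counter


def summarize(differences_pct):
    """differences_pct: one value per test set, or None when not significant."""
    bins = Counter()
    for pct in differences_pct:
        if pct is None:
            bins["no significant difference"] += 1
        elif pct < 20:
            bins["< 20%"] += 1
        elif pct < 30:
            bins["20 to <30%"] += 1
        elif pct < 50:
            bins["30 to <50%"] += 1
        else:
            bins["> 50%"] += 1
    total = sum(bins.values())
    return {label: (count, round(100 * count / total))
            for label, count in bins.items()}
```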

Operational Significance of Between-Site Differences in HOT

Of the 242 test sets analyzed for Octagon MaxFlo 100/0, 46% showed real between-site differences in HOTs. The extent of the difference and its operational significance varied greatly among the data sets. To assess the likely impact on field operations, the absolute size of the between-site differences was examined. Table 28 shows average values of the between-site differences and HOTs for selected ranges. This format shows the relationship between absolute HOT differences and the average value of HOT generated at both sites, and demonstrates an increase in HOT difference as the between-site differences grow larger.

For the range where the between-site differences are above zero but less than 10% of the lowest site, the average HOT difference was 3 minutes and the average HOT at both sites was 50 minutes. A difference of 3 minutes on a base of 50 minutes is not considered to be of large operational importance.

Similarly, for the next highest range, between 10 and 20%, the average HOT difference was 8 minutes and the average HOT at both sites was 64 minutes. Although larger than in the previous range, the difference of 8 minutes on a base of 64 minutes is still not judged to be of great operational importance.

For the range where the between-site differences lie between 20 and 30%, the average HOT difference was 17 minutes and the average HOT at both sites was 83 minutes. A difference of 17 minutes on a base of 83 minutes may have operational consequences.

For the last range, where the between-site differences are greater than 30%, the average HOT difference was 32 minutes and the average HOT at both sites was 75 minutes. A difference of 32 minutes on a base of 75 minutes has a definite operational effect.

It was concluded from this analysis that between-site differences in HOTs on the order of 20 to 30% are of potential operational interest, and between-site differences greater than 30% are of definite interest.

Table 28. Assessment of operational significance (based on Octagon MaxFlo).

Range of Between-Site Differences as % of Lowest Site | Tests Within Range (%) | Average Between-Site Difference in HOT (min) | Average HOT Both Sites (min)
0 to 10 | 11% | 3 | 50
10 to 20 | 18% | 8 | 64
20 to 30 | 10% | 17 | 83
30 and higher | 7% | 32 | 75

Summary of Differences: Base Case

The analysis described in the previous section was applied to each fluid in the manner shown for Octagon MaxFlo 100/0 in Table 28. The resulting tables are provided in Appendix C; Table 29 is a summary of the results for all fluids examined.

Table 29. Summary of between-site difference in fluid holdover time, base case.

Test sets where difference in endurance time as % of lowest site is: | SAE Type I Fluid | Clariant MP IV 2012 100/0 | Octagon MaxFlo 100/0 | Kilfrost ABC-S 75/25 | Kilfrost ABC-S 50/50
No statistical difference | 94 (39%) | 95 (39%) | 95 (39%) | 95 (39%) | 17 (40%)
Forced to equality by 120-minute rule | 0 (0%) | 26 (11%) | 37 (15%) | 15 (6%) | 1 (2%)
<20% | 110 (45%) | 88 (36%) | 69 (29%) | 88 (36%) | 12 (29%)
20 to 30% | 13 (5%) | 21 (9%) | 25 (10%) | 27 (11%) | 4 (10%)
30 to 50% | 14 (6%) | 7 (3%) | 9 (4%) | 10 (4%) | 2 (5%)
>50% | 11 (5%) | 5 (2%) | 7 (3%) | 7 (3%) | 6 (14%)
Total test sets analyzed | 242 (100%) | 242 (100%) | 242 (100%) | 242 (100%) | 42 (100%)

For all fluids examined, there was no statistical difference in the HOTs for the two sites for 39 to 40% of the data sets collected. Holdover times for a number of data sets for thickened non-Newtonian fluids were constrained to 120 minutes, with the consequence that there was no difference in HOT between the two sites.

For all fluids examined, there was no statistical difference in the HOTs for the two sites for 39 to 40% of the data sets collected.

Holdover times for a number of data sets for thickened non-Newtonian fluids were constrained to 120 minutes, with the consequence that there was no difference in HOT between the two sites. Full strength fluids were affected by this rule 11 to 15% of the time.

Other than the 50/50 mix, all fluids showed between-site HOT differences greater than 20% of the lower site value 14 to 18% of the time.

For thickened fluids at full strength and in a 75/25 mix, between-site HOT differences greater than 30% were seen 5 to 7% of the time, and differences greater than 50% were seen 2 to 3% of the time.

For Type I fluid, between-site HOT differences greater than 30% were seen 11% of the time, and differences greater than 50% were seen 5% of the time.

For the 50/50 fluid strength case, between-site HOT differences were larger than for the other fluids, with HOT differences greater than 20% about 29% of the time, greater than 30% about 19% of the time, and greater than 50% about 14% of the time.

Summary of Differences—CAR Exemption Case

A secondary analysis was conducted wherein the measured data were adjusted according to the conditions described for the CARs exemption case. The statistical analysis proceeded as described above for the base case.

Table 30 summarizes the differences using the CARs exemption conditions. The total number of tests analyzed is lower than for the base case due to the exclusion of test sets whose actual precipitation rate was less than 2.0 g/dm2/h or whose augmented precipitation rate exceeded 50 g/dm2/h. Fewer test sets are affected by the 120-minute capping rule as a consequence of the higher augmented precipitation rates and shorter HOT values.

Table 30. Summary of between-site difference in fluid holdover time—CAR exemption case.
Test sets where: | SAE Type I Fluid | Clariant MP IV 2012 100/0 | Octagon MaxFlo 100/0 | Kilfrost ABC-S 75/25 | Kilfrost ABC-S 50/50
No statistical difference | 69 (36%) | 69 (36%) | 69 (36%) | 69 (36%) | 9 (31%)
Forced to equality by 120-minute rule | 0 (0%) | 10 (5%) | 11 (6%) | 0 (0%) | 0 (0%)
Difference in endurance time <20% of lowest site | 94 (49%) | 81 (42%) | 73 (38%) | 82 (43%) | 11 (38%)
Difference 20 to 30% | 19 (10%) | 15 (8%) | 13 (7%) | 18 (9%) | 2 (7%)
Difference 30 to 50% | 7 (4%) | 13 (7%) | 18 (9%) | 17 (9%) | 5 (17%)
Difference >50% | 2 (1%) | 3 (2%) | 7 (4%) | 5 (3%) | 2 (7%)
Total test sets analyzed | 191 (100%) | 191 (100%) | 191 (100%) | 191 (100%) | 29 (100%)

In comparison to the base case, there is a decrease in the frequency of differences in the range from 20 to 30% and an increase in the range from 30 to <50%. A major reason for these changes is the stepped augmentation of measured rates in accordance with the CARs exemption. In the case of the Octagon fluid, of the 18 data set pairs falling in the 30 to <50% difference range, 10 experienced a differential in augmentation: the measured rates at one site were slightly below 10 g/dm2/h and thus were augmented by 6 g/dm2/h, while the rates at the other site were over 10 g/dm2/h and thus were augmented by 14 g/dm2/h.
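The adjustment described above can be expressed as a small pre-processing step. The sketch below is a simplification: it encodes only the two augmentation steps mentioned in this chapter (+6 g/dm2/h for rates below 10 g/dm2/h, +14 g/dm2/h for rates above 10 g/dm2/h) and the 2.0/50 g/dm2/h screening limits; the complete stepped schedule is defined in the CARs exemption itself (Reference 1), and how the screen applies across the two sites is an assumption here.

```python
def augmented_rate(measured_rate: float) -> float:
    """Apply the stepped rate augmentation discussed in the text.

    Only the two steps cited in this chapter are encoded; the full schedule
    comes from the CARs exemption (Reference 1), and the handling of a rate
    of exactly 10 g/dm2/h is assumed, so treat these thresholds as
    illustrative rather than authoritative.
    """
    if measured_rate < 10.0:
        return measured_rate + 6.0      # rates below 10 g/dm2/h: +6 g/dm2/h
    return measured_rate + 14.0         # rates above 10 g/dm2/h: +14 g/dm2/h

def keep_test_set(rate_site1: float, rate_site2: float) -> bool:
    """Screening for the CAR exemption case: drop a test set if a measured
    rate is below 2.0 g/dm2/h or an augmented rate exceeds 50 g/dm2/h.
    Whether the screen applies to either site or to a set average is not
    stated in the chapter; either-site is assumed here."""
    for r in (rate_site1, rate_site2):
        if r < 2.0 or augmented_rate(r) > 50.0:
            return False
    return True

# Example of the differential augmentation the text describes: one site just
# under 10 g/dm2/h, the other just over, producing a larger between-site gap.
print(augmented_rate(9.8), augmented_rate(10.2))   # 15.8 vs 24.2 g/dm2/h
```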

Examination of Site Separation Distance

To examine the relationship of distance between test sites and the size of between-site HOT differences, data sets were sorted by distance for the base case only. The tests were grouped into three distance ranges that each offered a reasonably large and similar number of tests. The distance range limits and the results for all data are shown in Table 31, which supports the following findings.

Table 31. Relationship of between-site differences and distance—all data, 2007 to 2008. (Cells show the number of tests, with column percentages, where the between-site difference falls in the stated range.)

Type I holdover times in snow—measured rates
Distance range (ft) | <20% | 20 to 29.9% | 30 to 49.9% | >50% | Total
4,167-5,052 | 64 (96%) | 3 (4%) | 0 | 0 | 67
7,017-13,390 | 62 (76%) | 4 (5%) | 8 (10%) | 8 (10%) | 82
27,800-28,500 | 78 (84%) | 6 (6%) | 6 (6%) | 3 (3%) | 93
Total tests analyzed | 204 (84%) | 13 (5%) | 14 (6%) | 11 (5%) | 242

Clariant 2012 100/0 holdover times in snow—measured rates
Distance range (ft) | <20% | 20 to 29.9% | 30 to 49.9% | >50% | Total
4,167-5,052 | 66 (99%) | 1 (1%) | 0 | 0 | 67
7,017-13,390 | 70 (85%) | 6 (7%) | 1 (1%) | 5 (6%) | 82
27,800-28,500 | 73 (78%) | 14 (15%) | 6 (6%) | 0 | 93
Total tests analyzed | 209 (86%) | 21 (9%) | 7 (3%) | 5 (2%) | 242

Octagon MaxFlo 100/0 holdover times in snow—measured rates
Distance range (ft) | <20% | 20 to 29.9% | 30 to 49.9% | >50% | Total
4,167-5,052 | 65 (97%) | 1 (1%) | 1 (1%) | 0 | 67
7,017-13,390 | 68 (83%) | 8 (10%) | 2 (2%) | 4 (5%) | 82
27,800-28,500 | 68 (73%) | 16 (17%) | 6 (6%) | 3 (3%) | 93
Total tests analyzed | 201 (83%) | 25 (10%) | 9 (4%) | 7 (3%) | 242

ABC-S 75/25 holdover times in snow—measured rates
Distance range (ft) | <20% | 20 to 29.9% | 30 to 49.9% | >50% | Total
4,167-5,052 | 66 (99%) | 1 (1%) | 0 | 0 | 67
7,017-13,390 | 65 (79%) | 9 (11%) | 3 (4%) | 5 (6%) | 82
27,800-28,500 | 67 (72%) | 17 (18%) | 7 (8%) | 2 (2%) | 93
Total tests analyzed | 198 (82%) | 27 (11%) | 10 (4%) | 7 (3%) | 242

ABC-S 50/50 holdover times in snow—measured rates
Distance range (ft) | <20% | 20 to 29.9% | 30 to 49.9% | >50% | Total
4,167-5,052 | 12 (100%) | 0 | 0 | 0 | 12
7,017-13,390 | 8 (62%) | 3 (23%) | 0 (0%) | 2 (15%) | 13
27,800-28,500 | 10 (59%) | 1 (6%) | 2 (12%) | 4 (24%) | 17
Total tests analyzed | 30 (71%) | 4 (10%) | 2 (5%) | 6 (14%) | 42

The results for all data provide the following findings:

• At the shortest distance range for all fluids, there was only one case of between-site differences greater than 30%;
• For the Type I fluid base case, the frequency of tests generating a percentage difference greater than 20% varied from:
  – 4% at the shortest distance; to
  – 25% at the mid-range distance; and
  – 15% at the longest distance.
• For the Clariant 2012 100/0 base case, the frequency of tests generating a percentage difference greater than 20% increased from:
  – 1% at the shortest distance; to
  – 14% at the mid-range distance; and
  – 21% at the longest distance.

• For the Octagon MaxFlo 100/0 base case, the frequency of tests generating a percentage difference greater than 20% increased from:
  – 2% at the shortest distance; to
  – 17% at the mid-range distance; and
  – 26% at the longest distance.
• For the ABC-S 75/25 base case, the frequency of tests generating a percentage difference greater than 20% increased from:
  – 1% at the shortest distance; to
  – 21% at the mid-range distance; and
  – 28% at the longest distance.
• For the ABC-S 50/50 base case, the frequency of tests generating a percentage difference greater than 20% increased from:
  – 0% at the shortest distance; to
  – 38% at the mid-range distance; and
  – 42% at the longest distance.

The mid-range distance showed a higher frequency of cases having between-site differences greater than 50%.

This examination shows that a relationship does exist between site-separation distance and the size of between-site holdover time differences.

Examination of Lake-Effect Snowfall on HOT Differences

The impact of lake-effect snowfall was examined by looking at the lake-effect snowfall data in isolation and comparing it to other data collected within the same site-separation range. The lake-effect data was collected at a between-site distance of 8,300 ft, placing it in the mid-range for the distance analysis. To examine its influence on HOT at the two sites, the lake-effect data was compared to the other data collected at the mid-range distance. The results are given in Table 32. Because the lake-effect data was collected at an OAT lower than −3°C (26.6°F), fluid ABC-S at the 50/50 strength is not included in the analysis.

The table shows that the frequency of cases where the between-site difference in HOT is 20% or more of the lower site value is substantially greater for the lake-effect data. Much of the increase shows up in the above-50% difference category.

Table 32. Effect of lake-effect snowfall on between-site HOT differences (2007–2008).
All data in the distance range 7,017 to 13,390 ft; cells show the number of tests (%) where the between-site difference falls in the stated range.

Fluid | Data set | <20% | 20 to 29.9% | 30 to 49.9% | >50% | Total
Type I HOTs in snow | no lake-effect | 44 (88%) | 2 (4%) | 4 (8%) | 0 (0%) | 50 (100%)
Type I HOTs in snow | lake-effect | 18 (56%) | 2 (6%) | 4 (13%) | 8 (25%) | 32 (100%)
Clariant 2012 100/0 HOTs in snow | no lake-effect | 45 (90%) | 3 (6%) | 1 (2%) | 1 (2%) | 50 (100%)
Clariant 2012 100/0 HOTs in snow | lake-effect | 25 (78%) | 3 (9%) | 0 (0%) | 4 (13%) | 32 (100%)
Octagon MaxFlo 100/0 HOTs in snow | no lake-effect | 44 (88%) | 3 (6%) | 2 (4%) | 1 (2%) | 50 (100%)
Octagon MaxFlo 100/0 HOTs in snow | lake-effect | 24 (75%) | 5 (16%) | 0 (0%) | 3 (9%) | 32 (100%)
ABC-S 75/25 HOTs in snow | no lake-effect | 43 (86%) | 4 (8%) | 2 (4%) | 1 (2%) | 50 (100%)
ABC-S 75/25 HOTs in snow | lake-effect | 22 (69%) | 5 (16%) | 1 (3%) | 4 (13%) | 32 (100%)
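The distance-range and lake-effect cross-tabulations in Tables 31 and 32 are straightforward to reproduce once each test set carries its separation distance, a lake-effect flag, and its difference percentage. The pandas sketch below is illustrative only; the column names, the bin edges, and the handful of sample rows are assumptions about how the data might be laid out, not the project's files.

```python
import pandas as pd

# Hypothetical layout: one row per two-site test set.
df = pd.DataFrame({
    "fluid":       ["Type I", "Type I", "Octagon MaxFlo 100/0"],
    "distance_ft": [8300, 28500, 4232],
    "lake_effect": [True, False, False],
    "diff_pct":    [62.3, 27.4, 22.4],   # difference as % of the lowest site
})

# Distance ranges used in the chapter (ft); edges chosen to cover the ranges.
dist_bins = [0, 5052, 13390, 28500]
dist_labels = ["4,167-5,052", "7,017-13,390", "27,800-28,500"]
df["dist_range"] = pd.cut(df["distance_ft"], bins=dist_bins, labels=dist_labels)

# Difference categories used in Tables 31 to 33.
diff_bins = [0, 20, 30, 50, float("inf")]
diff_labels = ["<20%", "20 to 29.9%", "30 to 49.9%", ">50%"]
df["diff_range"] = pd.cut(df["diff_pct"], bins=diff_bins, labels=diff_labels, right=False)

# Table 31-style panel for one fluid: counts by distance range and difference range.
type_i = df[df["fluid"] == "Type I"]
print(pd.crosstab(type_i["dist_range"], type_i["diff_range"]))

# Table 32-style comparison: mid-range tests split by the lake-effect flag.
mid = df[df["dist_range"] == "7,017-13,390"]
print(pd.crosstab(mid["lake_effect"], mid["diff_range"], normalize="index"))
```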

Relationship Between Site-Separation Distance and Between-Site HOT Differences Excluding Lake-Effect Data

Syracuse Hancock International Airport was selected for tests because it offered an opportunity to study lake-effect snowfall. Precipitation rates were recorded during one event, on January 8, 2009.

The lake-effect snowfall data was included in the previous analysis of between-site HOT differences versus distance separation between sites. Because this data occurred only in the mid-range distance, it would distort the true relationship of between-site HOT differences versus distance. The base case data was therefore re-examined with the lake-effect snowfall data removed. The results of the analysis with the lake-effect snowfall data removed are given in Table 33.

Removal of the lake-effect data produces a smoother relationship of HOT difference to distance, removing the bulge at the mid-range distance seen in the previous analysis (Table 31).

Table 33. Relationship of between-site differences and distance (excluding lake-effect data, 2007–2009). (Cells show the number of tests, with column percentages, where the between-site difference falls in the stated range.)

Type I holdover times in snow—measured rates (without lake-effect data)
Distance range (ft) | <20% | 20 to 29.9% | 30 to 49.9% | >50% | Total
4,167-5,052 | 64 (96%) | 3 (4%) | 0 | 0 | 67
7,017-13,390 | 44 (88%) | 2 (4%) | 4 (8%) | 0 (0%) | 50
27,800-28,500 | 78 (84%) | 6 (6%) | 6 (6%) | 3 (3%) | 93
Total tests analyzed | 186 (89%) | 11 (5%) | 10 (5%) | 3 (1%) | 210

Clariant 2012 100/0 holdover times in snow—measured rates (without lake-effect data)
Distance range (ft) | <20% | 20 to 29.9% | 30 to 49.9% | >50% | Total
4,167-5,052 | 66 (99%) | 1 (1%) | 0 | 0 | 67
7,017-13,390 | 45 (90%) | 3 (6%) | 1 (2%) | 1 (2%) | 50
27,800-28,500 | 73 (78%) | 14 (15%) | 6 (6%) | 0 | 93
Total tests analyzed | 184 (88%) | 18 (9%) | 7 (3%) | 1 (0%) | 210

Octagon MaxFlo 100/0 holdover times in snow—measured rates (without lake-effect data)
Distance range (ft) | <20% | 20 to 29.9% | 30 to 49.9% | >50% | Total
4,167-5,052 | 65 (97%) | 1 (1%) | 1 (1%) | 0 | 67
7,017-13,390 | 44 (88%) | 3 (6%) | 2 (4%) | 1 (2%) | 50
27,800-28,500 | 68 (73%) | 16 (17%) | 6 (6%) | 3 (3%) | 93
Total tests analyzed | 177 (84%) | 20 (10%) | 9 (4%) | 4 (2%) | 210

ABC-S 75/25 holdover times in snow—measured rates (without lake-effect data)
Distance range (ft) | <20% | 20 to 29.9% | 30 to 49.9% | >50% | Total
4,167-5,052 | 66 (99%) | 1 (1%) | 0 | 0 | 67
7,017-13,390 | 43 (86%) | 4 (8%) | 2 (4%) | 1 (2%) | 50
27,800-28,500 | 67 (72%) | 17 (18%) | 7 (8%) | 2 (2%) | 93
Total tests analyzed | 176 (84%) | 22 (10%) | 9 (4%) | 3 (1%) | 210

ABC-S 50/50 holdover times in snow—measured rates (no lake-effect data due to the temperature restriction for 50/50)
Distance range (ft) | <20% | 20 to 29.9% | 30 to 49.9% | >50% | Total
4,167-5,052 | 12 (100%) | 0 | 0 | 0 | 12
7,017-13,390 | 8 (62%) | 3 (23%) | 0 (0%) | 2 (15%) | 13
27,800-28,500 | 10 (59%) | 1 (6%) | 2 (12%) | 4 (24%) | 17
Total tests analyzed | 30 (71%) | 4 (10%) | 2 (5%) | 6 (14%) | 42

The final results by fluid type are:

• For the Type I fluid base case, the frequency of tests generating a percentage difference greater than 20% increased from:
  – 4% at the shortest distance; to
  – 12% at the mid-range distance; and
  – 15% at the longest distance.
• For the Clariant 2012 100/0 base case, the frequency of tests generating a percentage difference greater than 20% increased from:
  – 1% at the shortest distance; to
  – 10% at the mid-range distance; and
  – 21% at the longest distance.
• For the Octagon MaxFlo 100/0 base case, the frequency of tests generating a percentage difference greater than 20% increased from:
  – 2% at the shortest distance; to
  – 12% at the mid-range distance; and
  – 26% at the longest distance.

• For the ABC-S 75/25 base case, the frequency of tests generating a percentage difference greater than 20% increased from:
  – 1% at the shortest distance; to
  – 14% at the mid-range distance; and
  – 28% at the longest distance.
• For the ABC-S 50/50 base case, the frequency of tests generating a percentage difference greater than 20% increased from:
  – 0% at the shortest distance; to
  – 38% at the mid-range distance; and
  – 42% at the longest distance.

Comparison of HOTDS Results to Current Operational Practices

A brief comparison was made of the HOT guideline times that were in effect during the testing versus the HOT times generated by HOTDS systems using the precipitation measurements at the two test sites. This analysis is based on the base case and does not consider the CARs exemption criteria. The values that could have been in use by pilots were constructed from the current HOT guidelines, existing weather information (METAR reports), and the visibility chart that is used to convert visibility to snowfall rate.

METAR is a routine aviation weather report that typically comes from airports or permanent weather observation stations. Reports are generated once an hour; if conditions change significantly, they can be updated in special reports. A typical METAR report contains data for temperature, dew point, wind speed and direction, precipitation, cloud cover and heights, visibility, and barometric pressure. A METAR report may also contain other information, including precipitation amounts.

To establish the HOT values that could have been in effect, actual METAR reports in effect during selected tests were retrieved from archives. The METAR report gives the pilot two alternative ways to establish a value for snow intensity, which is then used to extract a holdover time from the HOT guidelines:

a) Using information on METAR visibility and time of day (daylight or darkness), the snowfall intensity can be read from a visibility chart that is part of the HOT guidance material. That snowfall intensity and the temperature can then be used to select the appropriate cell in the HOT table; and
b) The METAR report also gives a direct indication of snowfall intensity (light, moderate, or heavy). This indicated snowfall intensity, along with temperature, can be used by the pilot to select the appropriate cell in the HOT table.

In addition, in an actual operation, the pilot has the option of visually estimating visibility distance (based on runway markers or local landmarks) and converting that value to snow intensity using the visibility table. This approach was not available for this comparison.

Comparison of Snow Intensity Indicated by METAR Reports and Test Data

To examine the differences in snow intensity from the different sources, Table 34 was developed for four selected tests. The column headings show the source for the indicated snow intensity.

Test 95 offers a good illustration of the variance in METAR-indicated intensity that pilots have to deal with in actual operations, with one indication being heavy and the other light, whereas the actual measured intensity was moderate. Test 97 also shows a significant variance in METAR-indicated intensity, with one indication being heavy and the other light, whereas the measured intensity was light and very light.

Table 34. Comparison of snow intensity from different pilot aids.
HOTDS Test # | Time Interval | Daylight/Darkness | Visibility (statute miles) | OAT (°C) | Intensity from METAR visibility report and visibility chart | Intensity from METAR snow intensity report | Site 1 measured intensity (g/dm2/h) | Site 2 measured intensity (g/dm2/h)
95 | 00:10-00:20 | Darkness | 3/4 | 0 | Heavy | Light | 14.5 | 18.1
107 | 15:10-15:20 | Daylight | 1/2 | -4 | Moderate | Moderate | 28.3 | 20.5
97 | 00:50-01:00 | Darkness | 1/2 | 0 | Heavy | Light | 6.0 | 3.6
123 | 22:40-22:50 | Darkness | 3 | -8 | Light | Light | 5.8 | 3.6

Comparison of HOT Values Based on METAR and Test Data

The snow intensity indications shown in Table 34 were then used to construct Table 35 with holdover times.

Table 35. Comparison of HOT generated from different pilot aids.

HOTDS Test # | Daylight/Darkness | Visibility (statute miles) | OAT (°C) | Fluid Analyzed | HOT from METAR visibility report and snow visibility chart (min) | HOT from METAR snow intensity report (min) | Site 1 HOT (min) | Site 2 HOT (min)
95 | Darkness | 3/4 | 0 | Octagon MaxFlo 100/0 | no HOT | 40-90 | 86 | 104
107 | Daylight | 1/2 | -4 | Octagon MaxFlo 100/0 | 25-60 | 25-60 | 36 | 47
97 | Darkness | 1/2 | 0 | ABC-S 75/25 | no HOT | 30-55 | 93 | 120
123 | Darkness | 3 | -8 | ABC-S 75/25 | 25-50 | 25-50 | 79 | 112

The differences in snow intensity derived from the various sources have a large impact on the holdover guidelines:

• The maximum snow intensity covered in the HOT table is moderate; thus, for any snow intensity indication of heavy, there is no HOT value available to the pilot.
• Similarly, in cases where the METAR report leads to an indication of light snow, the HOT table for Type II and IV fluids will provide a HOT based on moderate snow. This HOT will usually be notably shorter than is really necessary for light snow.
• In Test 95, where the METAR reported the intensity of snow as light, the corresponding HOT table provides a range of holdover time from 40 to 90 minutes, based on moderate snow. The interpretation of this range can lead to further shortening of holdover times. The Transport Canada Holdover Time Guidelines caution as follows: "The only acceptable decision-making criterion, for takeoff without a pre-takeoff contamination inspection, is the shorter time within the applicable holdover time table cell." Thus, in Test 95, the applicable holdover time based on METAR would be 40 minutes.
• For the two test sites, the HOT values shown are based on the actual test data. In the case of Test 95, the HOT values are 104 and 86 minutes, much longer than the value based on the METAR report.
• The same observations apply to the other tests selected for comparison. In all cases, the shortest HOT of the two test sites is longer than the applicable HOT derived from METAR sources.
• The range in HOT determined by METAR snow intensity (e.g., for Test 95, 90 − 40 = 50 min), compared to the variance in HOT from the two test sites (e.g., for Test 95, 104 − 86 = 18 min), is shown in Table 36.

Table 36. Range in HOTs.

Fluid Analyzed | Test # | Range in HOT from METAR (min) | Range in HOT from Test Data (min)
Octagon MaxFlo 100/0 | 95 | 50 | 18
Octagon MaxFlo 100/0 | 107 | 35 | 11
ABC-S 75/25 | 97 | 25 | 27
ABC-S 75/25 | 123 | 25 | 33

HOTDS Implementation Strategy and Timeline

The examination of HOT values generated from METAR indications showed that there is a genuine possibility that very different values can result from the two alternative ways of applying METAR reports. The use of METAR indications to generate HOT has some inherent shortcomings:

• An important one is its frequency of issue, generally on an hourly basis;
• The HOT values generated from METAR indications have airport-wide application, regardless of airport size;
• The precipitation rate reported in METARs (as light, moderate, or heavy) is not correlated with the liquid water equivalent (LWE) used during fluid testing to establish HOT guidelines; and
• Pilots must use subjective judgment when using METAR indications, or when using personally estimated visibility distance in conjunction with the HOT Guidelines, to establish a HOT value.

The availability of accurate information on the rate of precipitation, along with true indications of temperature and precipitation type, is the key to the generation of reliable HOT values. The current use of METAR indications and subjective assessments of weather conditions does not take full advantage of the accuracy and consistency provided by the scientific approach used to generate the HOT Guidelines.

In contrast, the HOTDS measures actual precipitation (LWE). These data are used, along with temperature, precipitation type, and the regression curves and coefficients generated during fluid endurance testing, to generate HOT values. Subjectivity is removed and the complete process is scientifically based. In addition, HOT values can be updated every 10 minutes.
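To make the two pathways concrete, the sketch below contrasts a METAR-style categorical lookup with a HOTDS-style computation from a measured LWE rate. Every numeric value in it (the visibility-chart thresholds, the HOT table cells, and the regression coefficients) is a placeholder for illustration only; the real values come from the published HOT Guidelines and the fluid-specific regression coefficients (Reference 2), and the power-law regression form shown is an assumption, not the project's documented model.

```python
from typing import Optional

# Illustrative sketch only: every threshold, table cell, and coefficient below
# is a placeholder, NOT taken from the HOT Guidelines or Reference 2.

def intensity_from_visibility(vis_sm: float, daylight: bool) -> str:
    """Placeholder stand-in for the visibility chart in the HOT guidance
    material (lower visibility implies higher snowfall intensity; night
    thresholds differ from day). Values are invented for illustration."""
    limit = 1.0 if daylight else 1.5      # hypothetical thresholds (statute miles)
    if vis_sm < limit / 2:
        return "heavy"
    if vis_sm < limit:
        return "moderate"
    return "light"

# Hypothetical HOT table cells: (low, high) minutes per intensity category.
HOT_TABLE = {"light": (40.0, 90.0), "moderate": (25.0, 60.0)}   # no cell for "heavy"

def hot_from_metar(intensity: str) -> Optional[float]:
    """Applicable HOT per the Transport Canada caution quoted in the text:
    use the shorter time within the cell; heavy snow has no HOT value."""
    cell = HOT_TABLE.get(intensity)
    return None if cell is None else min(cell)

def hot_from_hotds(lwe_rate: float, a: float = 400.0, b: float = -0.8) -> float:
    """HOTDS-style value from a measured LWE rate (g/dm2/h), assuming a
    power-law endurance-time regression HOT = a * rate**b with placeholder
    coefficients. The actual fluid- and temperature-specific regressions are
    those developed during endurance testing (Reference 2)."""
    return min(120.0, a * lwe_rate ** b)   # capped, as thickened-fluid HOTs are, at 120 min

# Example with arbitrary inputs (results will not match the report's tables):
print(hot_from_metar(intensity_from_visibility(0.75, daylight=False)))  # visibility-chart path
print(hot_from_metar("light"))                                          # reported-intensity path
print(round(hot_from_hotds(16.3), 1))                                   # measured-rate path
```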

Implementation of a single HOTDS system at any airport, regardless of size, could potentially produce HOT values superior to those now generated through the use of METAR indications.

Conclusions and Recommendations

This task of ACRP Project 10-01 was conducted to determine if a single-location precipitation sensor can reliably report precipitation intensity for an entire airport. The conclusions and recommendations resulting from this task are presented in this section.

Conclusions

Test Methodology

The approach to collecting test data was effective, and the data provided a suitable base for comparing HOTs generated from two separate test sites at an airport. The test methodology developed and applied in the collection of data proved satisfactory. The repeatability of precipitation rates measured among the four samples collected at each site proved to be better than for rate collection during fluid endurance time tests.

Two sets of analysis were conducted. One was based on the data as collected (the base case) and the second was based on the precipitation rate data adjusted by the CAR exemption conditions. In each case, HOTs were calculated for a selection of currently active fluids at specific strengths.

Operational Significance of Between-Site Differences

The extent of the between-site difference in HOT and its level of impact on the operation varied greatly. Examination of the absolute size of between-site differences led to the conclusion that between-site differences in holdover times on the order of 20 to 30% are of potential operational interest, and between-site differences greater than 30% are operationally significant.

Base Case Examination of Between-Site Differences

For all fluids examined, there was no statistical difference in HOT values in approximately 40% of the data sets collected. Between-site differences in HOT values varied by fluid type and fluid strength:

• For thickened fluids at full strength and in a 75/25 mix, between-site HOT differences greater than 30% were seen 5 to 7% of the time and differences greater than 50% were seen 2 to 3% of the time.
• For Type I fluid, between-site HOT differences greater than 30% were seen 11% of the time and differences greater than 50% were seen 5% of the time.
• For the 50/50 fluid strength case, between-site HOT differences were larger than for the other fluids, with HOT differences greater than 20% about 29% of the time, greater than 30% about 19% of the time, and greater than 50% about 14% of the time.
CAR Exemption Case Examination of Between-Site Differences

In comparison to the base case, there was a decrease in the frequency of between-site differences in the range of 20 to 30% and an increase in the range of 30 to 50% when looking at HOTs using the CARs exemption conditions. A major reason for this shift was the stepped augmentation of measured rates in accordance with the CARs exemption. In the case of the Octagon fluid, for example, of the 18 data set pairs falling in the 30 to less than 50% difference range, 10 experienced a differential in augmentation: the measured rates at one site were slightly below 10 g/dm2/h and thus were augmented by 6 g/dm2/h, while the rates at the other site were over 10 g/dm2/h and thus were augmented by 14 g/dm2/h.

Examination of Site Separation Distance

Sorting the base case data into three separation-distance ranges showed a distinct relationship between site distance and HOT difference. The longest separation distances showed a considerably higher frequency of occurrence of large between-site differences in HOT.

The frequency of tests generating a between-site difference greater than 20% varied by shortest, mid-range, and longest distance separation as shown in Table 37 (note that lake-effect data has been removed).

Table 37. Frequency of tests generating between-site differences > 20%.

Fluid | Frequency (%) at Shortest Separation (4,167-5,052 ft) | Frequency (%) at Mid-Range Separation (7,017-13,390 ft) | Frequency (%) at Longest Separation (27,800-28,500 ft)
Type I | 4 | 12 | 15
Clariant 2012 100/0 | 1 | 10 | 21
Octagon MaxFlo 100/0 | 2 | 12 | 26
Kilfrost ABC-S 75/25 | 1 | 14 | 28
Kilfrost ABC-S 50/50 | 0 | 38 | 42

Examination of Lake-Effect Snowfall on HOT Differences

The lake-effect data, collected at a between-site distance of 8,300 ft, was compared to the other data collected at the mid-range distance. The frequency of cases where the between-site difference in HOT was 30% or more of the lower site value was substantially greater for the lake-effect data. Much of the increase showed up in the >50% difference category.

Comparison of HOTDS Results to Current Operational Practices

There is considerable variance in the snow intensity derived from METAR sources and from the test data. The METAR report gives the pilot two alternative ways to establish a value for snow intensity. METAR reports retrieved for selected periods of testing gave conflicting intensities for the two alternatives, such as heavy and light snow; in some cases, the corresponding intensity from the collected data was moderate.

The variability in snow intensity indications leads to large differences in HOT. In some cases, the METAR visibility and snow visibility charts led to no HOT being available, while the test data produced operationally valuable holdover times. The lower HOT from the two test sites was generally longer than the HOT value derived from either alternative using METAR reports.

These results suggest that a single HOTDS installation may be able to produce HOT values superior to those now generated through the use of METAR indications, despite the variance in precipitation over the airfield.

General Conclusion

In general, differences in between-site HOTs for snow can be significant to the operation, and they are a function of distance. The extent of the differences can be worsened by lake-effect snowfall. Differences in HOT generated from different sites begin to impact the operation when the sites are separated by mid-range distances (7,017 to 13,390 ft), and have a definite impact at long separation distances (27,800 to 28,500 ft).

The finding of variances in precipitation rate and HOT over a large airport should not be a consideration or obstacle to further development of the HOTDS over the short term.

Recommendations

In the short term, the finding of variances in precipitation rate and HOT over a large airport should not be a consideration or obstacle to further development of the HOTDS. This condition should be considered only in the further development and application of HOTDS systems for large airports, where the taxi distance from deicing locations to the assigned departure runway can be very long. Smaller airports with taxi distances on the order of 5,000 ft are not affected. A possible solution may be to compare the accuracy of HOTs generated from current processes to HOTs generated from a single HOTDS installation at a large airport. If the single-installation HOTDS is more accurate than the current processes, then the single-installation HOTDS may be deemed adequate.

In the longer term, a study should be conducted to compare the accuracy of HOT values generated from a single HOTDS installation at a large airport to the accuracy associated with HOT values generated from current processes using METAR indications and pilot assessments. Two approaches may be considered:

1. Install more than one HOTDS system, with the actual number being dependent on each airport's layout and geography. This approach ultimately leads to questions as to where and how many systems need to be installed, and subsequently how the different indications should be interpreted:

   • Should the average of all sites be used?
   • Should only the value from the installed site nearest the entrance to the departure runway be used?
   • Should the lowest HOT value be used?

2. Develop a correction factor-of-safety rule to be applied to indications generated from a single airport system. It may be necessary to develop an appropriate correction factor for each individual airport to address its unique size, runway layout, and type of winter precipitation expected.

A guideline for gathering and collecting the information necessary for the development of a local correction rule for a single HOTDS system would be required. Additional data on the relationship between separation distances and rate of winter precipitation of all types, and on lake-effect snowfall, would be needed.

References

1. Exemption from Subsection 602.11(4) of the Canadian Aviation Regulations and Sections 1.0, 3.0, 6.0, 6.2, 6.3 and 7.1.1.1 of Standard 622.11 Ground Icing Operations.
2. Bendickson, S., Regression Coefficients Used To Develop Aircraft Ground Deicing Holdover Time Tables: Winter 2007–08, APS Aviation Inc., Transportation Development Centre, Montreal, December 2007, TP14782E.
