ties, and expected values of attributes of the transportation and land use system.

The location choice set consists of all locations known by the individual. "Known" in this context means that the agent knows not only the physical location but also the attributes that are potentially relevant for evaluating utility values for all potential activities. Nevertheless, location choice sets are dynamic. Changes follow from processes of knowledge decay, reinforcement, and exploration (Arentze and Timmermans 2005b, 2006). The strength of a memory trace of a particular item in the choice set is modeled as follows:

W_i^{t+1} = \begin{cases} \beta W_i^t + \alpha U_i^t & \text{if } I_i^t = 1 \\ \beta W_i^t & \text{otherwise} \end{cases}   (6)

where

W_i^t = strength of the memory trace (awareness) of location i at time t;
I_i^t = 1 if the location was chosen at time t, and 0 otherwise;
U_i^t = utility attributed to location i;
0 ≤ α ≤ 1 = parameter representing a recency weight; and
0 ≤ β ≤ 1 = parameter representing the retention rate.

The coefficients α and β determine the size of reinforcement and memory retention, respectively, and are parameters of the system.

Exploration, in contrast, is a process by which new elements can enter the choice set. The probability that a certain location i is added to the choice set in a given time step is modeled as

P(H_i^t) = P(G^t) \, P(H_i^t \mid G^t)   (7)

where P(G^t) is the probability that the individual decides to explore and P(H_i^t | G^t) is the probability that location i is discovered during exploration and tried on a next choice occasion. Whereas the former probability is a parameter of the system to be set by the modeler, the latter probability is modeled as a function of the attractiveness of the location, based on the Boltzmann model (Sutton and Barton 1998):

P(H_i^t \mid G^t) = \frac{\exp(V_i^t / \tau)}{\sum_{i'} \exp(V_{i'}^t / \tau)}   (8)

where V_i^t is the utility of location i according to some measure and τ is a parameter determining the degree of randomness in the selection of new locations, which can also be interpreted as the degree of agent uncertainty (Han and Timmermans 2006). The higher the parameter τ is, the more evenly the probabilities are distributed across alternatives and, hence, the higher the randomness, and vice versa. More than one location may be added to the choice set in a given time step. A new location has priority over known locations in location choice and cannot be removed from the choice set before it has been tried once. Once tried, the new location receives a memory-trace strength and is subject to the same reinforcement and decay processes that hold for memory traces in general. As a consequence of these mechanisms, higher-utility locations have a higher probability of being chosen, for three reasons: (a) they have a higher probability of being discovered; (b) if discovered, they have a higher probability of being chosen; and (c) if chosen, they are more strongly reinforced. At the same time, they are not guaranteed to stay in the choice set, because of two other mechanisms: (a) if the utility decreases due to nonstationarity in the system (e.g., the locations no longer fit in changed schedules), the decay process will ensure that they vanish from the choice set, and (b) if more attractive locations are discovered, the original locations will be outperformed and, therefore, will decay.
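For concreteness, the reinforcement-and-decay rule of Equation 6 can be sketched in a few lines of Python. The function and parameter names below are illustrative only, and the default values of alpha and beta are arbitrary placeholders, not values taken from the model.

```python
def update_memory_trace(w_i, u_i, chosen, alpha=0.5, beta=0.9):
    """Return W_i^{t+1} from the current trace strength W_i^t (Equation 6)."""
    if chosen:                           # I_i^t = 1: location i was chosen at time t
        return beta * w_i + alpha * u_i  # retained trace plus utility reinforcement
    return beta * w_i                    # unchosen locations simply decay
```

Calling such an update once per simulated time step for every location in the choice set reproduces the behavior described above: chosen locations are reinforced, while all others decay toward zero.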
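The exploration step of Equations 7 and 8 can be sketched in the same spirit. Everything here is an assumed illustration: maybe_explore, boltzmann_discovery, p_explore, and tau are our own names, and the model itself specifies only the probabilities, not how candidate locations are enumerated or sampled.

```python
import math
import random

def boltzmann_discovery(utilities, tau):
    """Equation 8: Boltzmann probabilities of discovering each candidate location."""
    weights = [math.exp(v / tau) for v in utilities]
    total = sum(weights)
    return [w / total for w in weights]

def maybe_explore(candidate_utilities, p_explore, tau):
    """Equation 7: explore with probability P(G); if exploring, draw one candidate.

    Returns the index of the discovered location, or None if no exploration occurs.
    """
    if random.random() >= p_explore:      # agent does not explore this time step
        return None
    probs = boltzmann_discovery(candidate_utilities, tau)
    return random.choices(range(len(probs)), weights=probs, k=1)[0]
```

A larger tau flattens the discovery probabilities, matching the interpretation of tau as the degree of randomness or agent uncertainty.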
Finally, learning involves updating the default settings of activities, such as duration, start time, transport mode, and location. For this updating, each agent keeps a record of the probability distribution across each choice set. For start time and duration, which are continuous variables, a reasonable subrange is identified and subdivided into n rounded values. For each choice facet, the following Bayesian method of updating is used:

P_i^{t+1} = \begin{cases} \dfrac{P_i^t M^t + 1}{M^t + 1} & \text{if } I_i^t = 1 \\ \dfrac{P_i^t M^t}{M^t + 1} & \text{otherwise} \end{cases}   (9)

M^{t+1} = \lambda M^t + 1   (10)

where

P_i^t = probability of choice i at time t,
M^t = weighted count of the number of times the choice has been made in the past,
I_i^t = indication of whether i was chosen at time t, and
0 ≤ λ ≤ 1 = retention rate of past cases.

As implied by Equation 9, more recent cases have a higher weight in the update (if λ < 1), to account for possible nonstationarity in the agent's choice behavior. With the probability distribution of each choice facet at the current time step defined, the default is simply identified as the option having the highest probability across the choice set.
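A minimal sketch of the default-updating rule of Equations 9 and 10 for a single choice facet follows; update_defaults, lam, and the default parameter value are illustrative assumptions, not part of the published model.

```python
def update_defaults(probs, chosen_index, m, lam=0.9):
    """Equations 9 and 10 for one choice facet (e.g., start time split into n values).

    probs        : list of P_i^t over the facet's options (sums to 1)
    chosen_index : index of the option chosen at time t
    m            : M^t, weighted count of past cases
    lam          : retention rate of past cases, 0 <= lam <= 1
    Returns (updated probabilities, updated count, index of the new default).
    """
    new_probs = [
        (p * m + (1.0 if i == chosen_index else 0.0)) / (m + 1.0)   # Equation 9
        for i, p in enumerate(probs)
    ]
    new_m = lam * m + 1.0                                            # Equation 10
    default = max(range(len(new_probs)), key=new_probs.__getitem__)  # highest-probability option
    return new_probs, new_m, default
```

Under this rule the updated probabilities still sum to one, and because the weighted count stays bounded when λ < 1, each new case retains a noticeable weight, which is how the recency effect noted above arises.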