Schedule

2019
Friday, February 22nd
9:00 AM

Homomorphisms and cores of random digraphs

Esmaeil Parsa, University of Montana, Missoula

UC 333

9:00 AM - 9:15 AM

An acyclic coloring of the vertices of a digraph D is an assignment of colors to the vertices of D such that the vertices of the same color induce an acyclic subdigraph of D. If an acyclic coloring of D uses k distinct colors, it is called an acyclic k-coloring of D. The minimum number k for which there is an acyclic k-coloring of D is called the acyclic chromatic number of D. It is easy to see that the acyclic chromatic number of D equals k if and only if we can define a mapping f from V(D) to V(K_k) such that f maps each arc either to a vertex or to an arc of K_k with direction preserved, and the inverse image of every vertex of K_k induces an acyclic subdigraph of D. A mapping f: V(D) ---> V(C) with the aforementioned properties is called an acyclic homomorphism of D to C. If there is an acyclic homomorphism of D to C, we say D is C-colorable. A one-to-one and onto homomorphism is called an isomorphism, and an isomorphism of D to itself is called an automorphism. A digraph D is called a core if every homomorphism of D to itself is an automorphism. Complete digraphs are examples of cores.
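The definition above can be made concrete with a short sketch (not part of the talk): a coloring is acyclic exactly when no color class induces a directed cycle. The function names and the DFS-based cycle check are illustrative choices, not the speaker's code.

```python
from collections import defaultdict

def has_cycle(vertices, arcs):
    """Detect a directed cycle in the subdigraph induced on `vertices`."""
    vs = set(vertices)
    adj = defaultdict(list)
    for u, v in arcs:
        if u in vs and v in vs:   # keep only induced arcs
            adj[u].append(v)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in vs}

    def dfs(u):
        color[u] = GRAY
        for w in adj[u]:
            if color[w] == GRAY:                 # back arc: cycle found
                return True
            if color[w] == WHITE and dfs(w):
                return True
        color[u] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in vs)

def is_acyclic_coloring(arcs, coloring):
    """True iff every color class induces an acyclic subdigraph."""
    classes = defaultdict(list)
    for v, c in coloring.items():
        classes[c].append(v)
    return all(not has_cycle(vs, arcs) for vs in classes.values())
```

For the directed triangle 1->2->3->1, a single color fails (the whole cycle is monochromatic), while giving vertex 3 its own color succeeds.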

Cores play a very important role in the study of digraph homomorphisms. A first question about cores is: how large is the set of core digraphs? In this talk we prove that asymptotically almost surely every random digraph is a core. Loosely speaking, this means that if we have a bag containing all digraphs on n vertices and we randomly choose a digraph from the bag, the probability that the outcome is a core tends to 1 as n goes to infinity.

9:20 AM

High Dimensional Outlier Detection

Omid Khormali

UC 333

9:20 AM - 9:35 AM

In statistics and data science, outliers are data points that differ greatly from the other values in a data set. They are important when looking at large data sets because they can distort perception of the data as a whole. It is therefore very important to detect and adequately deal with outliers. Recently, in [V. Menon and S. Kalyani, Structured and Unstructured Outlier Identification for Robust PCA: A Non-iterative, Parameter-free Algorithm, arXiv:1809.04445v1], a novel algorithm for detecting outliers was presented which a) does not require knowledge of the outlier fraction, b) does not require knowledge of the dimension of the underlying subspace, c) is computationally simple and fast, and d) can handle structured and unstructured outliers. In this research, we improved this algorithm by reducing its complexity from O(n^2 m) to O((n/log n)^2 m), where n is the number of data points and m is the dimension of the space.
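Where the n^2 and (n/log n)^2 factors come from can be illustrated with a hypothetical sketch: any step that compares all pairs of n points in m dimensions costs O(n^2 m), and running the same step on a subsample of size n/log n costs O((n/log n)^2 m). The cited algorithm is PCA-based and more involved; this is only a cost illustration, and the function names are mine.

```python
import math
import random

def pairwise_cost(points):
    """Naive all-pairs distances: O(n^2 * m) scalar operations
    for n points in m dimensions."""
    n = len(points)
    dists = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(points[i], points[j])
            dists[i][j] = dists[j][i] = d
    return dists

def subsampled_cost(points, seed=0):
    """The same all-pairs step on a random subsample of size
    ~ n / log(n): O((n / log n)^2 * m) scalar operations."""
    n = len(points)
    k = min(n, max(2, round(n / math.log(n))))
    rng = random.Random(seed)
    return pairwise_cost(rng.sample(points, k))
```

For n = 100, the subsample has round(100 / ln 100) = 22 points, so the quadratic step touches roughly (22/100)^2 ≈ 5% of the original pair count.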

9:40 AM

The Alphabet Projection of Mass Spectrometry Data

Patrick Kreitzberg

9:40 AM - 9:55 AM

My presentation will be about finding small molecules in mass spectrometry (MS) data. There is a wide breadth of future applications for this technique, but the most impactful may be in drug testing. This method can be used to find what a drug has metabolized into (it could be a harmful poison or a therapeutic chemical) after it has interacted with a patient's physiology. MS is a technique used to find the mass of objects (e.g. molecules and amino acids) that are too small to be weighed by conventional means. The data are measured as mass versus intensity; intensity can be thought of as the abundance of that mass in the sample. The data look like a series of peaks, where a peak is present if a mass is found at that value and its height is proportional to the intensity of that mass. We use the mass differences between peaks to find molecules that are either too small to be found by MS or that have disappeared from the sample before the MS process began. We identify a set of the most important mass differentials, which we call an alphabet, that connect many masses in the MS data. There are methods that use an already known alphabet to build a graph, but such an alphabet may be so large as to be unusable for certain data sets, such as urine analysis. We are the first to propose a method that finds such an alphabet without any a priori knowledge of the data.

In order to find the most important masses in the MS data, we represent the masses as vertices in a graph. We connect two vertices if there is a mass in the alphabet equal to the difference between them. The larger the graph an alphabet builds, the more important the masses in that alphabet are. This comes from the idea that chain reactions are important: if many masses all lose a sugar mass, then the sugar must be important to the sample; if those smaller masses then lose a water molecule, water and sugar combined are important. From the MS data, millions of mass differentials may be calculated; we project these millions down to the most important ones, usually between 32 and 128.
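The graph-building step described above might be sketched as follows (the function name and the mass tolerance are illustrative assumptions, not from the talk):

```python
from itertools import combinations

def build_difference_graph(masses, alphabet, tol=0.01):
    """Connect two observed masses when their difference matches
    (within `tol` Da) some differential in the alphabet.
    Returns the edge list of the resulting graph."""
    edges = []
    for a, b in combinations(sorted(masses), 2):
        diff = b - a
        if any(abs(diff - d) <= tol for d in alphabet):
            edges.append((a, b))
    return edges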

The alphabets from which we build the graphs are determined randomly. Each mass differential is picked by choosing two random masses and taking the difference between them. We then have a model that estimates the quality of the graph(s) an alphabet produces. If the set of graphs produced by one alphabet scores better than that produced by another, we keep the first. After proposing random masses, building the graphs, and accepting the best alphabets many times, the search converges to a final answer, meaning we cannot find an alphabet that produces better graphs.
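The propose-and-accept loop above can be sketched as a simple random search (a hypothetical skeleton: the quality model is left as a user-supplied function, since the talk does not specify it):

```python
import random

def random_alphabet(masses, size, rng):
    """Propose an alphabet: each differential is the difference of
    two randomly chosen observed masses."""
    alpha = set()
    while len(alpha) < size:
        a, b = rng.sample(masses, 2)
        alpha.add(round(abs(a - b), 4))
    return alpha

def search_alphabet(masses, quality, size=32, iters=1000, seed=0):
    """Accept/reject search: keep the proposal whose graphs score
    best under the (user-supplied) `quality` model."""
    rng = random.Random(seed)
    best = random_alphabet(masses, size, rng)
    best_q = quality(best)
    for _ in range(iters):
        cand = random_alphabet(masses, size, rng)
        q = quality(cand)
        if q > best_q:
            best, best_q = cand, q
    return best, best_q
```

A real quality model would score the graphs built from the alphabet; any scoring function of the alphabet plugs into the same loop.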

As stated above, the most significant application may be in the field of drug testing. But this method can be used any time you are unsure of what a sample may contain. The TSA could use this method to determine which potential bomb-making chemicals to look for: make a bomb, take MS data, then use our method to find which chemicals are prevalent in the sample. Another biological application may be diagnosis. If we take a urine sample from a patient, a molecule may show up in our alphabet that can determine whether the patient is diabetic, experiencing kidney failure, pregnant, etc.

10:00 AM

Decreasing error associated with calculations of freshwater pCO2 using more accurate pH measurements

Fischer Young

UC 333

10:00 AM - 10:15 AM

As atmospheric CO2 continues to rise, understanding how CO2 cycles through the environment is of continuing importance. Although terrestrial and oceanic CO2 dynamics have been studied for decades, freshwater CO2 dynamics have not been studied as intensively. Increasing CO2 partial pressure (pCO2) in freshwaters indicates that increasing atmospheric CO2 is impacting freshwaters as well. However, a well-established method for accurately calculating freshwater pCO2 is lacking. Thus, it is necessary to establish an accurate method for calculating freshwater pCO2 to understand how increasing atmospheric CO2 is impacting freshwater systems.

Due to error in the pH measurement, large overestimations in pCO2 calculations have been demonstrated by previous studies. This study examines methods to minimize the error from pH and total alkalinity and thus provide more accurate and precise calculated pCO2 values. A laboratory-controlled mixing tank with a pCO2 infrared analyzer will provide the true values used for comparison in this study. pH will be determined spectrophotometrically using purified meta-cresol purple as the indicator. This method is widely used for ocean water analysis and has been adapted for freshwater analysis in this study. In addition, glass pH electrode measurements were also taken in order to compare the error associated with each method when calculating pCO2. An edited MATLAB version of CO2sys, a carbonate system modelling tool, that includes a correction for freshwater ionic strength will be utilized for the indirect calculations. Preliminary findings indicate that using the ionic strength correction for freshwater and spectrophotometric pH instead of a glass pH electrode resulted in more accurate and precise pCO2 calculations. The spectrophotometric pH pCO2 values gave a reproducibility of ±10 μatm, whereas the glass pH electrode pCO2 values gave a reproducibility of ±100 μatm for multiple samples. It is increasingly important to be able to obtain accurate and precise freshwater pCO2 values in order to designate a freshwater system as one that takes up CO2 from the atmosphere (sink) or expels CO2 to it (source). Due to poor pH accuracy and the resulting overestimation of calculated pCO2, it is likely that some freshwater systems have been incorrectly designated as sources of CO2 to the atmosphere. Freshwater pCO2 is a small but important key to accurate global carbon budgets.
The goal of this study is to improve the accuracy and precision of calculated freshwater pCO2 using pH data and to determine the importance of freshwater systems in global carbon budgets.
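For illustration, the indirect pH/alkalinity calculation can be sketched for an idealized freshwater carbonate system at 25 °C. The equilibrium constants below are rough textbook values, not the study's; CO2sys applies temperature- and ionic-strength-dependent formulations, which is precisely the correction the study describes.

```python
# Illustrative equilibrium constants, fresh water, 25 C (approximate)
K1 = 10 ** -6.35    # H2CO3* <=> H+ + HCO3-
K2 = 10 ** -10.33   # HCO3-  <=> H+ + CO3--
KW = 10 ** -14.0    # water dissociation
K0 = 0.0339         # Henry's law constant, mol L-1 atm-1

def pco2_from_ph_alkalinity(ph, ta):
    """Estimate pCO2 (in microatm) from pH and total alkalinity (mol/L).

    Uses TA = [HCO3-] + 2[CO3--] + [OH-] - [H+]
            = [HCO3-] * (1 + 2*K2/h) + KW/h - h,
    then [CO2*] = [HCO3-]*h/K1 and pCO2 = [CO2*]/K0.
    """
    h = 10 ** -ph
    hco3 = (ta - KW / h + h) / (1 + 2 * K2 / h)
    co2_star = hco3 * h / K1      # dissolved CO2 + H2CO3
    return co2_star / K0 * 1e6    # atm -> microatm
```

Because pCO2 scales with [H+], a small pH error propagates multiplicatively: at TA = 1 mmol/L, pH 7.0 gives several thousand microatm while pH 8.0 gives only a few hundred, which is why pH accuracy dominates the calculation's error budget.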

10:20 AM

Recreation Impacts: A Case Study of the Sawtooth Wilderness

Chelsea Phillippe

UC 333

10:20 AM - 10:35 AM

Outdoor recreation provides an opportunity for people to connect with nature, improve mental and physical health, and support local economies. The Outdoor Industry Association reported 13.6 million recreationists in 2017 (1.3 million more than in 2016), equating to nearly half of the US population participating in outdoor activities. Federal land managers utilize education, regulation, and law enforcement to protect our natural resources from negative impacts, including those from recreation and tourism. However, there is limited evidence about the effectiveness of these strategies (e.g. Leave No Trace) over long time frames. One area of particular concern is federally designated Wilderness, where there is a tension between preservation and human use. By exploring emerging and existing trends in wilderness recreation, federal land managers may better develop policy for sustainable tourism and recreation.

The Sawtooth Wilderness in central Idaho provides a unique case study for exploring the effectiveness of intervention strategies aimed at reducing negative recreation impacts. Record keeping from campsite monitoring, beginning in the 1990s, locates and measures recreation-related impacts on the ground. Twenty years of required wilderness permits, necessary for both day and overnight use, provide insight into user trends: where visitors go, how long they stay, and their modes of travel. Supplementary educational reports from the Wildlands Education Plan, interviews with locals and education practitioners, and educator numbers from the national Leave No Trace campaign provide insight into wilderness education efforts. The Theory of Planned Behavior guides this research toward a better appreciation of user trends, impacts, and education.

The Sawtooth Wilderness case study is timely, as it focuses on a wilderness at a pivotal turning point in its management and regulation policies. Its location in central Idaho provides a buffer from high use and impacts, but as the surrounding communities grow, so do their impacts. In 2017 the United States Census Bureau announced Idaho as the fastest-growing state in the nation, while its capital, Boise, was named the No. 2 spot for young professionals by Forbes Magazine. As new Boise residents discover unique opportunities to escape day-to-day stresses in the serene mountain landscapes of the Sawtooths, their impacts will likely compound with those of existing users. Land managers are challenged to balance these recreation impacts while safeguarding wilderness to maintain pristine and wild conditions.

Preliminary data analysis suggests that use is concentrated in a few select campsite areas and that user trends have changed over the last few decades. Initial Key Informant interviews indicate that the Sawtooth Wildlands Education Plan has been successful in reducing recreation impacts including those at campsites.

10:40 AM

Lessons from 45 Years of Wilderness Fire Management in the Northern Rockies

Julia Berkey

UC 333

10:40 AM - 10:55 AM

Wildfire management is a hotly debated subject in the western United States. When it comes to forest fires and fuel loads, everyone from the president down to the local homeowner has an opinion on the best management practices. For the better part of the last century, fire management in the American West has defaulted to aggressive suppression of all fire ignitions. Where this has been successful, the result has been denser, more homogenized forest stands and an increased risk of uncharacteristically severe wildfire due to high fuel loading. In contrast, in the early 1970s a few wilderness areas began to allow some naturally ignited fires to burn. These landscapes now serve as laboratories for the ecological effects of reintroduced fire, as well as the challenges inherent in implementing such a strategy.

This research is aimed at synthesizing the fire management history from three large northern Rockies wilderness areas. A summary of the program, including its challenges and successes, will inform fire managers and policy makers on the history and lessons learned from over 40 years of wilderness fire management. To accomplish this, a literature review was first conducted to compile the existing papers, reports, and interviews related to wilderness fire. In addition, geospatial fire history data dating back to 1889 were collected to analyze the impact of different management strategies on total area burned. Following initial data collection, interviews were conducted with past and current fire managers and policy makers to fill in the persisting data gaps.

Results from the spatial fire history analysis indicate that the percent of total area burned has returned to or surpassed pre-suppression levels for all three wilderness areas. From both the literature and interviews, an emerging theme is that this success depends on management personnel with a strong wilderness ethic and willingness to accept the risks of long-term fire management. We therefore recommend training and incentive programs to cultivate and encourage the ethic and skills crucial to successful backcountry fire management. However, this reliance on personality over policy leaves the fate of the wilderness fire program up to chance. We conclude that management of wildfire as an ecological process would be more likely to occur if policy were written to better support non-suppression tactics. Ultimately, this research will provide information to both the local fire managers and regional and national policy makers on how to better implement and support the wilderness fire program. Such actions taken to strengthen the program will provide exceptional public benefits, such as millions of dollars saved on firefighting costs and ecological restoration of fire-dependent forest ecosystems.

11:00 AM

Differentiation and Screening for Hearing Loss and Cognitive Decline in Occupational Therapy Practice

Taylor Clough

UC 333

11:00 AM - 11:15 AM

After arthritis and hypertension, hearing loss ranks as the third most prevalent chronic health condition impacting the older adult population. Research is currently exploring a possible relationship between cognitive decline and hearing loss. Cognitive decline and hearing loss present with similar signs and symptoms, including requests for repetition or reinstruction, frustration, depression, and social withdrawal. Occupational therapists nationally are being surveyed regarding knowledge of best practice with patients experiencing hearing loss and dementia. National survey data will be made available in March 2019. However, the Montana clinicians’ data reported concerns about a patient’s cognitive function (100%) and hearing ability (88%). 94% had received instruction in administering cognitive screening tests, while 85% could interpret results. 91% had not received instruction in audiometer operation; 88% could not interpret an audiogram. Audiologists collaborating with occupational therapists could help ensure best practice.

11:20 AM

Employing the art of nanotechnology in wound healing

Zahra Mahdieh

UC 333

11:20 AM - 11:35 AM

The current technology for improved wound healing lags behind the needs and requirements for effective wound dressings, even though the costs already exceed $50 billion annually in the US alone. An optimal wound dressing has to: 1) serve as a protective barrier on the wound to assist the healing process, 2) require minimal changes to reduce pain and the potential for infections, and 3) provide a sustained and controlled release of medications to facilitate wound healing. In order to achieve these goals, research is being conducted to improve drug delivery from wound dressings. The major challenge with current wound dressings is a phenomenon called drug burst release: a rapid initial release of the drug from a wound dressing that delivers all the medication at once, decreasing the efficiency of a single dressing. The objective of my research is to develop technology for a more prolonged delivery of medications in a format suitable for wound dressings. Achieving this goal requires using nanotechnology in the form of a fiber mat containing an inner core with silver nanoparticles within a shell. The silver nanoparticles serve as an effective antibacterial agent. It also requires forming pores in the shell when the mat is placed on the wound, allowing the silver to be released to the wound. (This structure resembles a garden hose containing water that cannot be released until holes are punched through the surface, allowing the water to slowly seep out.) The major challenges in developing this technology are creating the core-shell structure and the ability to form pores.

I have been able to produce a fiber mat and load the silver nanoparticles (diameter ~20 nm) in the core of the fibers. The core-shell structure of individual fibers was visualized by adding fluorescent dye inside the core. The outer diameter of the fibers is about 1400 nm and the inner diameter is about 800 nm. I have been able to form pores (diameter ~ 170 nm) on the surface of the fibers by using a mixture of a soluble polymer and nanoparticles. The fabricated fiber mat (wound dressing) released 34% of the loaded silver nanoparticles in a period of three days. The obtained results are promising in terms of a prolonged drug release. Furthermore, the antibacterial activity of the fabricated wound dressing was shown to be effective by killing bacteria in less than two hours.

Electrospun fiber mats have a flexible and porous structure with high surface area, which allows for liquid and gas permeability. These characteristics of fiber mats facilitate hemostasis, cell respiration, regulated moisture level, and a suitable bacterial barrier. The resulting wound dressing provides a prolonged drug delivery to the wound site, which decreases the need to frequently change the dressing. Less frequent dressing changes will result in a decrease in patients’ discomfort, care costs, further tissue damage, and the risk of infections. Funding: P30GM103338.

11:40 AM

Associations between meaningful activity and social closeness with well-being for persons with disabilities

Ari Silverman, University of Montana, Missoula

UC 333

11:40 AM - 11:55 AM

Persons with disabilities (PWD) experience high rates of physical and mental challenges, high levels of depression and suicidality, and are considered a health disparity population. PWD often lack opportunities to fully participate in meaningful experiences in their communities, resulting in feelings of social isolation or loneliness. Improving well-being is paramount to enhancing the health status of PWD. Two approaches that have demonstrated promise in increasing long-term well-being in the general population are 1) engagement in meaningful activity and 2) experiences of social closeness. This study examined whether these two approaches were associated with greater immediate well-being for PWD and those without disabilities.

Meaningful activities have been defined as subjective experiences that provide an opportunity for completion of important tasks, and foster a sense of feeling valued, in-control, and socially connected with others. To understand the role of meaningful activity in well-being, we asked the question: is participation in meaningful activities associated with greater well-being? Social closeness is a person’s perception of their degree of embeddedness in a social network. We analyzed the link between social closeness and well-being by asking the question: is participation in activities with socially close others (operationalized as family members, household members, or friends) associated with greater well-being? Finally, we ascertained what particular activities and which types of relationships were related with the greatest happiness and meaning.

To answer our questions, we used data from the 2010, 2012, and 2013 Well-Being Module of the American Time Use Survey (ATUS) conducted by the Bureau of Labor Statistics. Data were collected using a version of the Daily Reconstruction Method, which demonstrates strong validity for determining variations in affective state during the course of a normal day. Participants (PWD = 4,079, persons without disabilities = 30,486) included a large and representative sample of the non-institutionalized U.S. population age 15+. Predictor variables were the meaningfulness of the activity and the presence of various types of relationship categories during the activity. Outcome variables were assessed by a 7-point likert scale, and included 1) happiness and 2) the average of the negative well-being scales of pain, sadness, stress, and fatigue. Meaningfulness was used as an outcome variable to determine which relationships and activities were most meaningful.

To our knowledge, this study is the first of its kind to demonstrate that meaningful activities are associated with greater immediate happiness and less negative well-being for PWD and persons without disabilities. PWD especially benefit from activities that foster a sense of mastery, such as household activities, and that strengthen self-efficacy, such as participation in government services and civic obligations. Importantly, our findings are the first to indicate the value of being with socially close others regardless of the extent of interpersonal interaction. Merely being in the presence of socially close others is associated with greater immediate happiness.

Our findings highlight the utility of incorporating ample opportunities for social closeness and meaningful activities that allow for mastery and self-efficacy in interventions aimed at improving well-being for PWD. Increasing daily experiences of social closeness and meaning may result in improved health outcomes and greater quality of life for this marginalized population.

1:30 PM

Using RNA Sequencing to Observe Biased Agonism in PPARγ

Mariah Rayl

UC 333

1:30 PM - 1:45 PM

Diabetes currently affects 9.4% of the United States’ population, costing approximately $245 billion in direct and indirect costs. Additionally, 33.9% of the population has prediabetes, which can progress to diabetes without adequate preventative measures. Between 90% and 95% of diabetes cases are specifically type two diabetes (T2D). A major hallmark of T2D is a reduced cellular response to insulin. Due to the increasing prevalence of T2D in the United States, medications such as rosiglitazone and pioglitazone have become increasingly important for managing the condition, but like many other medications, they come with side effects. Ideally, the downstream effects of medication could be controlled by creating drugs with fewer side effects.

Rosiglitazone and pioglitazone are FDA approved drugs used for insulin sensitization in T2D and act through binding of peroxisome proliferator-activated receptor γ (PPARγ). PPARγ changes cellular transcription based on which type of drug binds to it. When agonists bind, PPARγ activity increases and when antagonists bind, PPARγ activity decreases. Unfortunately, PPARγ affects a wide range of biological processes so drug treatment causes a wide range of side effects. These side effects include organ failure, weight gain, fluid retention, bone fractures, and more.

The classic model of PPARγ implies that when PPARγ is treated with an agonist, it recruits a coactivator protein, thereby activating transcription, but when treated with an antagonist, PPARγ recruits a corepressor protein, thereby repressing transcription. More recent data indicate that slight structural changes in PPARγ due to drug treatment affect its affinity for various coactivators in more than two ways, making the classic model inadequate. We propose a model of biased agonism in which differential recruitment of coactivators leads to different sets of genes being transcribed. If the gene sets can be determined and drugs can be designed to better select one gene set over another, side effects may be reduced or eliminated by exploiting a drug’s bias toward a certain gene set.

Because PPARγ directly affects transcription, we developed an mRNA sequencing experiment in order to determine exactly which genes PPARγ binding drugs affect in isolated human adipose cells. We observed that the drug GW1929 affects these cells differently than the drug rosiglitazone even though they are both agonists. GW1929 could be displaying biased agonism in that it preferentially recruits one coactivator over others and leads to a distinct transcriptional pattern that is not observed in the other drugs.

1:50 PM

Biased Drugs: The many ways to turn “on” a receptor

Michelle Nemetchek

UC 333

1:50 PM - 2:05 PM

Type II Diabetes, formerly known as “adult onset” diabetes, is a disease that affects almost 10% of Americans, and it is expected that 40% of Americans will develop Type II Diabetes in their lifetime. Type II differs from Type I diabetes in that it develops later in life, and though it is heavily dependent on diet and exercise, it also has a large genetic component. Patients with Type II often produce insulin, but their bodies have become resistant to it, preventing its normal function as a signal to take up glucose from the bloodstream. This ultimately leads to cells becoming starved and, if not treated, can dramatically increase the chances of patients developing heart disease and nerve damage.

A target of many Type II Diabetes drugs currently prescribed in the US is a receptor called PPARγ. These drugs resensitize the body to insulin, allowing it to be recognized as a signal for glucose uptake. Citing side effects like organ failure and weakened bones, other countries have chosen to take these prescription drugs off the market. In the US, some drugs that target PPARγ have been taken off the market completely, and others have previously carried a “black box” warning of potential side effects. Though some of these drugs are currently prescribed, there is still much room for improvement.

Here, we demonstrate that this receptor can be precisely controlled to turn “on” some genes but not others when bound to certain drugs. Some drugs control genes by making the receptor interact with the cell’s transcriptional machinery; other drugs encourage interaction of this receptor with a partner receptor; still others may affect the DNA sequences that the receptor recognizes. To establish these drug effects, in vitro protein interaction experiments using fluorescent probes measure interactions between the receptor, transcriptional machinery partners, and DNA. All in all, we see that the binding of certain drugs encourages different interactions between the receptor and these partners. These “biased” drugs have the potential to turn on the genes that lead to antidiabetic effects while avoiding the activation of genes that lead to undesirable side effects.

2:10 PM

Isoxazolo[3,4-d]pyridazinones positively modulate the metabotropic glutamate subtypes 2 and 4

Christina Gates, University of Montana - Missoula

UC 333

2:10 PM - 2:25 PM

The seven-transmembrane (7TM) superfamily, also known as G-protein coupled receptors (GPCRs), is one of the largest superfamilies in the human genome. With approximately 30% of marketed drugs targeting GPCRs, these proteins are among the most successful therapeutic targets. Within the GPCR family there is a subgroup called the metabotropic glutamate receptors (mGluRs). Unlike other GPCRs, mGluRs bind the endogenous ligand glutamate via a large Venus flytrap domain (VFT) to produce a cellular response. There are 8 subtypes within the class of mGluRs (1-8), grouped by amino acid sequence similarities. Depending on the type of compound that binds and the mGluR subtype, a different cellular response will result. Compounds that target mGluRs are important for the treatment of a variety of central nervous system (CNS) disorders, as well as cancer. Selectively targeting the VFT domain is difficult due to its high similarity across the mGluRs. This difficulty can be overcome by targeting another regulatory region located in the 7TM, known as the allosteric site. This presents a more selective target because there is less sequence similarity between the mGluR subtypes here, which could lead to less off-target activity. Our isoxazolo[3,4-d]pyridazinone compounds were tested and found to have selective activity at mGluR2 and mGluR4. This selectivity, along with other tests, implies that binding may occur not at the VFT but rather at the allosteric site as positive allosteric modulators (PAMs), leading to the selective activity. The mGluR2 subtype is a target for treatment of anxiety and schizophrenia, and successful activation may help to alleviate them. Activation of mGluR4 helps to ease the symptoms of Parkinson’s disease and may even slow the progress of the disease.
Additionally, both of these receptors have been implicated in the treatment of a variety of cancers of the brain and other organ systems, such as glioma, medulloblastoma, and colorectal carcinoma, presenting another target for overcoming these diseases. Further modifications of our compounds will be developed to optimize selectivity and activity, working from a hypothesis based on structural binding to the allosteric site. Our progress on the new synthesis and biological evaluation will be presented here along with future work.

2:30 PM

Intraspecific variation in plant response to drought: assessing the current state of knowledge

Mariah McIntosh, University of Montana

UC 333

2:30 PM - 2:45 PM

Because plant growth, fitness, and survival strongly depend on water availability, it is imperative that plants are able to cope with water stress when drought occurs. Therefore, understanding plant adaptation to drought has major implications in basic and applied science, from informing fundamental ecological theory explaining plant distributions and interactions, to predicting and modeling local to global responses to climate change, to implementing effective ecological restoration. Research shows that plants vary greatly in their response to drought, with significant differences in sensitivity to drought and drought coping strategies. However, current work primarily focuses on differences between species or distantly related functional groups (interspecific variation), while little is known about variation in drought response between populations within the same species (intraspecific variation). How this variation is structured between and within species remains a fundamental question in plant ecophysiology. A formal assessment of literature considering intraspecific variation in plant drought adaptation across ecosystems, plant taxa, functional groups, life history strategies, or drought response traits has not yet been made. Therefore, the extent of variation between versus within species is unknown, and the current state of this field may poorly inform applications in ecology, crop science, climate science, ecological restoration, and more.

To characterize what is currently known about within-species variation in drought adaptation, I conducted a literature review assessing intraspecific variation in natural (non-crop) plant populations. Analyzing all publications hosted on Web of Science pertaining to this topic, I asked how intraspecific variation is structured within the literature across ecosystem types, geographical regions, plant taxa, functional groups, life history strategies, and drought response traits. I assessed if and when there are differences between populations of the same species in adaptation to drought and, where studies allowed, whether this variation is genetic, plastic, or both. I also considered which ecosystem types, geographical regions, plant taxa, functional groups, drought response traits, and experiment types are most represented in the literature. The results of this review characterize the current state of knowledge of intraspecific variation in plant drought adaptation and identify the areas most in need of further research, specifically the need for multi-trait, multi-species, rangewide studies assessing drought adaptation traits. Ultimately, this review will aid in assessing current ecological theory pertaining to plant adaptation to drought and inform applied work, from predicting forest mortality under climate change, to improving the efficacy of restoration projects in drought-prone areas, to guiding future research.

2:50 PM

Tree spatial patterns modulate peak snow accumulation and snow disappearance

Eryn Schneider

UC 333

2:50 PM - 3:05 PM

Forests and snow-covered regions frequently co-occur across the northern hemisphere. In these environments, forests are structurally and spatially complex mosaics of tree neighborhoods that are intrinsically linked to ecosystem functions. Tree and canopy structures influence snow accumulation and disappearance processes through interception and radiation attenuation. However, it is unclear to what extent spatial heterogeneity within the forest canopy induces heterogeneity in snow accumulation and persistence. Using a forest-based approach, we identified and tested the differential effects of within-forest neighborhoods on snow processes. Neighborhood types included individual ponderosa pine (Pinus ponderosa), Douglas-fir (Pseudotsuga menziesii), and western larch (Larix occidentalis) trees, tree clumps, openings, and regeneration patches. Neighborhoods were identified within a mixed-conifer forest and paired with intensive measurements of snow accumulation (density and depth) and persistence. Overall, neighborhood type and year had a significant effect on accumulation and snow disappearance. Openings were significantly different from clumps and individuals, always accumulating more snow. Openings retained snow significantly later than clumps but were not significantly different from individuals. Within the individual tree neighborhood, a nested species effect indicated no differences in accumulation but significant differences in disappearance between deciduous and evergreen conifers, with snow persisting longer beneath deciduous western larch. Our results suggest that canopy interception is the primary mechanism driving the accumulation phase, while snow disappearance patterns are a consequence of increased longwave radiation.
Reducing canopy interception and longwave radiation by creating widely spaced single trees and small openings will increase snow depth and duration and thus water yield, while maintaining a heterogeneous canopy structure that includes tree clumps can be used to meet multiple objectives including diverse wildlife habitat, timing of green-up, and plant biodiversity.

3:10 PM

Total energy intake and self-selected macronutrient distribution during wildland fire suppression

Alex Marks

UC 333

3:10 PM - 3:25 PM

INTRODUCTION: Wildland firefighters (WLFF) are required to work long hours in extreme environments, resulting in high daily rates of total energy expenditure (TEE) (Ruby, 2002; Cuddy, 2015). Increasing the number of eating episodes throughout the work shift and/or providing rations that better promote convenient nutrient delivery (Cuddy, 2007; Montain, 2008) has been shown to augment self-selected work output on the fireline. Regular consumption of supplemental carbohydrate (CHO) has also demonstrated enhanced work output, particularly during the latter hours of the shift (Cuddy, 2007). However, it remains unclear how current feeding strategies of WLFF compare to more frequent nutrient delivery. PURPOSE: The aim of the current study was to determine the self-selected field total energy intake (TEI), composition, and patterns of WLFF feeding during wildland fire suppression shifts. METHODS: 86 WLFF (16 female, 70 male; 27.5±6.4 yrs) were deployed to 12 different wildland fire assignments across six regions of the US during the 2018 fire season. Pre- and post-shift food inventories were collected at WLFF basecamp and provided item-specific nutrient content (calories [kcal], CHO, fat, protein). Work shift nutrient consumption (TEI, feeding frequency [total number of and interval between feeding episodes], feeding episodic composition) was monitored in real-time by field researchers on the fireline via observational data capture on mobile tablets. RESULTS: Work shift length averaged 14.0±1.2 hr, with a TEI of 1494.3±592 kcal (51±10, 37±8, 14±4 % for CHO, fat, and protein, respectively). The total number of eating episodes was 4.3±1.7, with an average interval of 117±76 min. Eating episodes averaged 344±307 kcal and included 44±38 g CHO. Using similar intake metrics, TEI was 893±353 and 1356±560 kcal for breakfast and dinner, respectively.
CONCLUSION: The present work shift TEI approximates 34% of the TEE measured in our prior doubly labeled water studies (Ruby, 2002; Cuddy, 2015). These data also demonstrate that WLFF consumption patterns using current rations may not deliver adequate nutrients for their occupational demands. Future work should elucidate the impact of work shift provisions on overall patterns of self-selected work output.