APEC 3611w: Environmental and Natural Resource Economics

Valuation

Putting a Monetary Value on Ecosystem Services

Resources

Slides 21 - Valuation

Content

Valuation of Ecosystem Services

Introduction

Today’s lecture focuses on the valuation of ecosystem services. Throughout this course, we have spent considerable time on the concept of ecosystem services, focusing heavily on estimating the production-function side: we have used the InVEST toolkit to establish where specific ecosystem service values are generated, producing highly spatial information and doing QGIS work. The entire value proposition of the ecosystem services approach to conservation, however, has centered on the idea of putting a dollar value on nature, and so far we have not focused much on that monetization component. Today, we will look at how to take InVEST outputs, which are biophysical in nature, and assign specific dollar values to them.

Reading the Landscape: Digital Elevation Models

Understanding DEMs

The first slide presented appears familiar from our prior work in QGIS. When thinking back to what we have been using, this data represents a digital elevation model, or DEM. These models display beautiful variation that almost appears organic in nature, even though it is entirely computational. The variation in elevation across a DEM can help identify geographic locations and features.

Identifying Geographic Features

Looking at this particular DEM, we can identify several key features. The image shows approximately where the Mississippi River is located, specifically at the bend where the Minnesota and Mississippi rivers join. The St. Paul campus is located in this region, and if we followed the river’s flow, we could see it continue throughout the landscape. If we zoomed out slightly, it would become much easier to identify the location because we would see the coastlines and other definitive features.

Elevation as a Key Input

Having spent considerable time in the geospatial world, one begins to see things differently. Elevation emerges as one of the key inputs in understanding landscapes. This perspective is crucial for understanding how we approach ecosystem services from a spatial perspective and how we eventually apply value to the services provided by these landscapes.
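As a minimal sketch of how elevation data can be read computationally, the snippet below builds a tiny synthetic DEM with NumPy, computes slope from elevation gradients, and locates the lowest cell, the kind of valley a river occupies. The elevation values, the 30 m cell size, and the use of plain NumPy rather than a full GIS stack are all assumptions for illustration; a real DEM would be loaded from a GeoTIFF with a library such as rasterio.

```python
import numpy as np

# A tiny synthetic DEM (elevations in meters). Values are made up
# for illustration; real DEMs come from GeoTIFF rasters.
dem = np.array([
    [320.0, 310.0, 300.0, 305.0],
    [315.0, 295.0, 280.0, 300.0],
    [312.0, 290.0, 250.0, 295.0],
    [318.0, 300.0, 255.0, 290.0],
])

# Slope magnitude from elevation gradients, assuming a 30 m cell size
# (roughly the resolution of common SRTM-derived DEMs).
dy, dx = np.gradient(dem, 30.0)
slope = np.sqrt(dx**2 + dy**2)

# The lowest cell marks the valley a river likely occupies:
# water flows downhill toward it.
channel = np.unravel_index(np.argmin(dem), dem.shape)
print("lowest cell (row, col):", channel)
```

This is the same logic, in miniature, that lets you "read" a DEM in QGIS: variation in elevation reveals channels, ridges, and watershed structure.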

Agenda and Framework for Today’s Lecture

Overview of Topics

Today’s lecture will cover two distinct sub-themes. First, we will discuss the different types of economic value. While these concepts are present in many parts of economics, we will emphasize the parts most relevant to ecosystem services and the more general task of putting a dollar value on nature. This understanding is crucial for the cost-benefit analyses we conduct. Second, we will discuss the methods we might use to put specific value on ecosystem services. Thus, the lecture structure moves from types of value to methods for establishing that value.

The Total Economic Value Framework

We will return to a fundamental framework throughout this lecture and fill it out as we progress. This framework presents a taxonomy of the different types of total economic value, sometimes indicated as TEV. We will slowly build up to the complete diagram, but we will start with some of the sub-components. The first component we will discuss falls on the use side of things: use value.

Use Value: The Foundation of Economic Valuation

Direct Use Value

Direct use value is the easiest of all values to think about. It covers situations where we directly consume something from nature. The direct use components of ecosystem services include things that have market value: a fish we can buy from the store, or roots we might forage in a forest or purchase directly. Hunting certainly has a market value insofar as people buy permits for it, though sometimes the value captured by the permit is less than the total amount people would be willing to pay. Timber is another straightforward example of direct use value with an observable market price.

Non-Consumptive Direct Use

It is worth noting that direct use does not necessarily mean the resource actually gets used up. Thinking back to our discussion of public versus private goods and consumptive versus non-consumptive goods, many things have some degree of non-rivalness. Walking down a trail, at least initially if there are not too many people, does not consume it, or at least does not noticeably wear it down. If enough people walk the trail, erosion sets in, and with too many people at once it does start to be rival. Everything we discussed about the rivalness and non-rivalness of public goods is relevant here. The key point is that an ecosystem service can be either type of good.

Rivalness in Ecosystem Services

Different ecosystem services sit at different points on this rival versus non-rival spectrum. You would compete with other fishers to be the first to pull fish out of the lake right as the fishing season opens, so those resources are consumed, whereas you do not have to worry about that with a trail unless it becomes congested. The nature of the good, rivalrous or non-rivalrous, matters when considering direct use values.

Indirect Use Value

The Importance of Indirect Value

Where ecosystem services really start to matter is with indirect value. Indirect value appears in many parts of the economy but is especially important for ecosystem services, because many of the ways nature provides value to us are not through direct use. Rather, indirect value captures the ways ecosystem services support something else that has direct use value.

Water-Related Indirect Values

A couple of examples involve water. Water itself has direct use value, since you can drink it, while the ecosystem components that support water provision have indirect use value. A wetland might help clean the water that is then directly consumed. We would put a dollar value on the wetland, but with respect to the drinking water it is indirect, insofar as it makes something else more valuable: the wetland itself is not consumed, yet it provides value by enabling the direct consumption of clean water.

Forest Services and Water Filtration

Another example would be the vegetation and root structure of forests. We know that forests increase water filtration. Just like with the sediment retention model, forests hold water there rather than letting it run off across the landscape, and this allows it to seep in, which ultimately increases the amount of water available to the river over time. This would be something that contributes to the direct use of water, but the valuation of the forest itself would be indirect use because we do not consume the forest in this particular case. We are consuming the water that it provides.

Market Observability of Use Values

One thing to note is that use values, both direct and indirect, tend to have an observable market value. The purchase of fish in a grocery store tells us how much people actually value it. For all the reasons we have discussed before, an equilibrium market price is a good aggregate measure of how much people care about a good at the margin. Many of these values, whether direct or indirect, therefore have a large component that can be observed directly from the market. This is convenient, because the methods for putting monetary numbers on these ecosystem services can rely on those market values, making the process comparatively straightforward.

Option Value: The Bridge Between Use and Non-Use

Understanding Option Value

We must now discuss something that sits in between the two previous categories: option value. If you take a finance course, you learn a lot about option value. Instead of paying for something you want to consume right now, like buying a fish and immediately eating it, option value reflects the fact that people will express a positive willingness to pay (WTP) to preserve an option for future use.

Financial Context

Finance deals with options through options trading. A financial option is a contract that gives the holder the right, but not the obligation, to buy or sell an asset at a set price if certain conditions occur, for example if a stock falls below a given price. These are actual contracts that give people the option to act if something happens. Derivatives like these were among the contributors to the 2008 financial crisis, but that is a whole different story.

Ecosystem Services and Option Value

For ecosystem services, this means we might place a positive value on avoiding an action, especially under uncertainty or when degradation would be irreversible. Just as with a financial option, many people would pay a premium today for the right to enjoy the ecosystem’s benefits later. A classic case is the value of genetic resources in biodiversity-rich ecosystems. People derive direct value from taking the genetic information of newly discovered species and figuring out how it might be useful for making drugs; this is one of the major sources of drug discovery. Research shows it can make a great deal of sense to pay people not to degrade these biodiversity-rich ecosystems, because degradation could be irreversible and would destroy value we might only realize later. Of course, you do not know exactly what that value is until the research and development is done, but there is real value in preserving the possibility that the value exists.

Precautionary Principle

One of the interesting things about option value is that it provides an economic rationale for precaution. Even when current benefits seem low, accounting for the preservation of future options can raise the value substantially. Option value thus bridges the gap between market values and the non-market values we turn to next.
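The logic of paying a premium to keep a choice open can be made concrete with a small worked example. All payoffs and probabilities below are hypothetical, and the calculation is a sketch of the quasi-option-value idea (the value of waiting equals the expected value of choosing after uncertainty resolves, minus the value of committing now); it is not a method from the lecture slides.

```python
# Quasi-option value sketch: preserve an ecosystem until its uncertain
# value (e.g. a drug discovered from its genetic resources) is revealed,
# versus irreversibly developing the land today. Numbers are hypothetical.
develop_payoff = 100.0           # known payoff of converting the land now

# Uncertain future ecosystem value: 20% chance of a big discovery.
p_high, v_high = 0.20, 600.0
p_low,  v_low  = 0.80, 40.0
expected_v = p_high * v_high + p_low * v_low      # 152.0

# Commit today using only the expected ecosystem value:
commit_now = max(develop_payoff, expected_v)      # 152.0

# Preserve, observe the true state, then pick the better action:
wait_and_see = (p_high * max(develop_payoff, v_high)
                + p_low * max(develop_payoff, v_low))   # 200.0

option_value = wait_and_see - commit_now
print(option_value)  # → 48.0
```

The positive gap is exactly the precautionary premium discussed above: flexibility has value because degradation is irreversible while information improves over time.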

Non-Use Value: Beyond Market Transactions

Introduction to Non-Use Values

Referring back to our original framework, those were the use values. The second major type is non-use value: things for which market valuation is typically not possible. We will split this into two specific types of non-use value: first, existence value; second, bequest value.

Existence Value

Existence value reflects the fact that we might be willing to pay for a resource simply to continue existing, even with no intention of ever using it. It sounds similar to option value, but option value is about preserving the option of use, whereas existence value is purely non-use. An example would be people who will never visit the Amazon and yet report that they would be willing to pay for conservation actions that preserve it.

The Paradox of Existence Value

From the ultra-rational economist’s perspective, this makes absolutely no sense. How can you be willing to put value on something and even pay for its preservation if you are never going to be the one who gets to consume it? Well, lo and behold, billions of dollars flow to organizations like the Nature Conservancy, where they are doing exactly this—taking money from individuals who are paying to protect nature they have heard about but will probably never go to.

Personal Experience with Tropical Ecosystems

I, for one, will never visit the Amazon: the thought of a place with gigantic bugs and huge snakes terrifies me. The closest I have come was a summer trip to Sri Lanka, where we visited a national park. Even though such parks are far less developed than you might expect, we showed up at the main office and it was clear nobody had been there for weeks and weeks. We completely startled the ranger looking after the park. He seemed surprised that anyone wanted to buy a ticket, so he dusted off the official government tickets and handed them to us.

Wild Nature and Existence Value

We went out on the trail and the whole thing was absolutely terrifying. We made it around the first turn and found a log across the path. We went to step over it, except the whole log was moving: it was covered in insects, millipedes and other alarming things, and we immediately turned around. This was not the well-manicured kind of national park we are used to in the U.S. Yet I still place a positive value on the Amazon, and it is definitely a non-use value, because I have no desire to go there. That is what it means for people to pay to protect nature even when they will never benefit from it directly.

Charismatic Species and Conservation Funding

Another example is charismatic species. A lot of money flows into environmental protection because of a handful of cute, cuddly, or sometimes ferocious-looking animals: the panda, the red panda, Bengal tigers. We probably care about the whole ecosystem, but something in our human value system makes us willing to pay more for animals that are cute, fuzzy, or have features resembling our own. That willingness makes it far easier to drive donations than trying to raise money for cockroaches, and it explains a big part of the flow of conservation funds into these particular species and their habitats.

Bequest Value: Intergenerational Equity

But even existence value does not capture the whole picture of non-use values. There is also a willingness to preserve things for future generations, hence the word bequest. This becomes especially important as you get older and start to think about what sort of world you will leave to your children and grandchildren. Bequest value is motivated by intergenerational equity, and it is even further removed from narrowly rational economics. Existence value already takes a big step away, since you place a dollar value on something you will never consume; bequest value goes further, because it is not even about your own future benefit but about wanting other people, in future generations, to have the option of valuing the resource. We are talking not about our own future uses but about other people’s.
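The full TEV taxonomy built up above can be summarized as a small data structure, with hypothetical dollar figures for a single site. Placing option value on the use side follows common convention, though the lecture treats it as a bridge between the two branches; all of the numbers are invented purely for illustration.

```python
# Sketch of the total economic value (TEV) taxonomy from the lecture,
# with hypothetical dollar figures for a single wetland.
tev = {
    "use": {
        "direct":   120_000,   # e.g. fishing, timber, recreation
        "indirect":  80_000,   # e.g. filtration supporting drinking water
        "option":    25_000,   # WTP to preserve future use
    },
    "non_use": {
        "existence": 40_000,   # value from the resource simply existing
        "bequest":   30_000,   # value of passing it to future generations
    },
}

# TEV is the sum over every leaf of the taxonomy.
total = sum(v for branch in tev.values() for v in branch.values())
print(total)  # → 295000
```

Notice that if we measured only what markets observe directly, roughly the "use" branch, we would miss a substantial share of the total.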

Richness of Economic Frameworks

One great thing about this framework is the contrast with standard, simplified economics, in which everything would be direct use value measured through market prices. This framework provides a much richer account of how people actually make decisions, and that is probably true well beyond ecosystem services: many market goods likely carry these other values too, even though economics retains a bias toward the single market-value line. The framework highlights the heterogeneity among the different ways we value things, and that heterogeneity means we will need a correspondingly wide variety of methods to establish those values.

Moving from Types to Methods

The Necessity of Multiple Valuation Methods

That covers the different types of value. Because we value nature in such heterogeneous ways, no single method can capture them all, and this is the second key topic today. Different ecosystem services carry different components of value, so we will need to explore a wide variety of methods for establishing what that value is. So instead of types, we now move to methods.

Methods Overview

What are these methods? One way to organize these methods is to align this with the previous graphic. At the bottom of the previous framework, we talked about market valuation on the left and non-market valuation on the right. This methods framework is similar but moves down to more specific approaches.

Methods for Valuing Ecosystem Services

Market Valuation Methods

Introduction to Market-Based Approaches

For market valuation, we start with replacement cost. These methods work because we observe something in the market: someone actually pays a replacement cost, or damages actually occur or are demonstrably avoided.

Replacement Cost Method

What is replacement cost? There is interesting research on wild pollinators. Research and development funds are being put into tiny drones that can replicate pollination services: given basic artificial intelligence, they navigate between plants, bump into flowers, and use small feelers to collect pollen and carry it to other flowers that need pollinating. That sounds expensive, and it would be, as a replacement for pollination services we currently get for free. Here is the point: pollination itself has no easily observed market value, but it leads to the production of crops, which do. If people are willing to spend actual money on these drones, then the natural service we got for free has demonstrable value. Put differently, however much you would be willing to pay for a replacement is a good estimate of the environmental value of the thing lost.

Wetland Replacement Cost Example

A classic example involves wetland values. If you lose a wetland, any municipal engineer will immediately tell you to consider what alternative you will have for water filtration and storage. My parents belong to a church that was struggling financially when a large condominium development was planned next door. Because the condo would pave over a lot of natural land, environmental engineers applied standard formulas for how much water filtration and storage capacity would need to be installed. The church granted an easement on its property and was paid $660,000 just for the ability to convert a field into a retaining pond. You see these ponds all over, especially in suburbs with plenty of space and no underground piping: they hold the influx of water after a big storm while letting it slowly percolate into the groundwater. That $660,000 is a good estimate of the wetland value of keeping that land natural. So one way of thinking about it is that as we degrade nature, there are literally engineering solutions, expensive ones, that we must pay for to replace what the wetland provided for free.
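A back-of-the-envelope replacement-cost calculation might look like the following. The storage volume and unit costs are hypothetical, chosen so the total lands near the $660,000 easement figure from the story; real stormwater engineering formulas are considerably more involved.

```python
# Replacement-cost sketch for a lost wetland's storage and filtration.
# All figures are hypothetical, for illustration only.
runoff_volume_m3 = 11_000       # storm-water storage the paved site must handle
excavation_cost_per_m3 = 45.0   # cost to construct retaining-pond capacity ($/m3)
land_cost = 165_000.0           # land set aside for the pond ($)

replacement_cost = runoff_volume_m3 * excavation_cost_per_m3 + land_cost
print(replacement_cost)  # → 660000.0
```

The point of the sketch is the structure, not the numbers: the engineered substitute's full cost (construction plus land) is the estimate of what the natural wetland was providing for free.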

Limitations of Replacement Cost

This logic is strong enough that it even makes it into municipal budget planning discussions. But it can break down; it does not work in all cases. First, there is not always a replacement: the pollination drones may turn out to be a poor substitute, and some services simply have no replacement at all, in which case the method underestimates the value. Replacement cost is therefore quite useful when replacements exist but must be used with caution when they do not.

Avoided Cost Method

Definition and Concept

The second method is avoided cost. It sounds similar to replacement cost but is distinct: avoided cost values an environmental good by the costs to society it prevents from occurring in the first place. Data on flooding damages show how expensive these events are; Hurricane Sandy, for example, caused roughly $61 billion of damage. That damage is easy to observe, in buildings flooded or knocked down and in what people pay to repair them. There is no doubt about the costs associated with such an event.

Environmental Protection and Cost Avoidance

But when nature provides a way to avoid those costs, and it often does, we should put a dollar value on that. An example is mangroves, which are considered coastal armor because they grow right at the edge of saltwater. Has anybody been to a mangrove? They are remarkable. On one trip we got to snorkel underneath them: the water was shallow, but the roots arched over like tunnels, the fish were dense, and the whole thing was a maze. That root structure is literally armor. In a big hurricane or storm event, it drastically reduces the storm surge, waves, and other flooding drivers that destroy adjacent property. Many other ecosystems do this besides mangroves: coral reefs have huge protective value, as do seagrasses and others. The basic idea is that without them, the damages incurred by buildings near the shore would be much higher.
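A minimal avoided-cost sketch: the annual value of the mangrove buffer is the expected damage it prevents. The storm probability and damage figures below are hypothetical.

```python
# Avoided-cost sketch for coastal protection. Numbers are hypothetical.
p_storm = 0.05                   # annual probability of a major storm
damage_without_mangrove = 2.0e9  # property damage if the coast is bare ($)
damage_with_mangrove = 1.2e9     # damage with the mangrove buffer intact ($)

# The mangrove's annual protective value is the expected damage avoided.
expected_avoided_cost = p_storm * (damage_without_mangrove - damage_with_mangrove)
print(expected_avoided_cost)     # expected avoided damages per year ($)
```

Discounting a stream of such annual values would give the asset value of the mangrove, which could then be compared against the cost of an engineered substitute like a seawall.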

Overlap Between Methods

This is similar to replacement cost but differs insofar as we are avoiding damage rather than replacing a service with a substitute. There is some overlap: hurricanes are so expensive that mangrove value can be estimated not only from avoided costs but also from replacement value. Cities are literally putting up seawalls, and global expenditure on them is rising over time as the oceans rise and storms become more extreme. We are paying a lot of money for very expensive substitutes, so mangroves could plausibly be valued in multiple ways, by replacement costs and by avoided costs.

Common Applications

Common applications of avoided cost include flood protection, air filtration, and pollination. The method works well when the avoided costs are well defined, but it becomes difficult when damages are diffuse and spread across all of society. Hurricanes are the easy case: it is obvious that the building’s owner is the one damaged. But consider climate change operating through slightly higher temperatures that affect labor forces throughout society; the direct damages are harder to see precisely because they are diffuse among the whole population.

Water Treatment Avoided Costs in Minnesota

Some of the work done in Minnesota, by a couple of close collaborators, looks at avoided costs of water treatment. Minnesota is a very agricultural state, so there is a lot of nutrient runoff. One of the many ways this is a problem is when nutrients get into the groundwater: people with wells drawing from that groundwater now pull up polluted water, which causes all sorts of health problems, one of which is blue baby syndrome. As nitrogen in these wells increases, well owners suffer direct damages that can be measured through their behavior, specifically what they spend money on to mitigate the damage. A study by Bonnie Keeler and Steve Polasky from this department looked at the specific costs paid by specific well owners. Their spatial data shows existing wells, with color indicating the level of nitrate pollution. They also surveyed those owners on what they would do, or had done, to mitigate the damage: reverse osmosis (filtration through a membrane), distillation (evaporating the water with heat), anion exchange (treating it with chemicals), or simply drilling a new well. People do this because you simply cannot use these wells once they get too polluted.

Future Scenario Analysis

These are real costs that well owners would not have had to pay if nature had filtered the water effectively before it reached their wells. The study also included an interesting scenario analysis modeling what would happen if agriculture expanded in the future, meaning more fields and more nutrients on the landscape. They estimated how many wells would surpass the dangerous nitrate threshold under this expansion scenario and found that the dollar values exploded. This is a good example of why you cannot just look at present costs; you also need to ask where costs might go under future scenarios.

Non-Market Valuation Methods

Introduction

So that is the market valuation side. Now let us switch to the non-market side. These market-based approaches work because we are observing something in the market—we saw a person who had to pay the replacement cost or we saw damages actually happen or could be avoided. But in many cases, it is hard to do that or there is no observable market transaction, so you have to get a little bit more clever. These are a lot of fun.

Hedonic Analysis

Definition and Origins

The first non-market method is hedonic analysis. This is a technique that exists well outside environmental economics. Hedonic shares its root with hedonism: how do we get direct pleasure from things? It initially came about to understand how people value the attributes of things like houses. How much money is an extra bathroom worth? Hedonic analysis is a set of statistical techniques that compares transactions on houses of different qualities, like three bathrooms versus four, and builds a predictive model of how much higher the price would be with an extra bathroom.

Application to Environmental Values

In the domain of environmental values, what is really cool is that the same technique, applied to the same datasets (the transactions of all housing sales in a metro area), can show how houses near environmental amenities sell for more. The price difference between, say, a home surrounded by a beautiful environment and an otherwise comparable home in an urban concrete area is another way of putting a dollar value on how much people care about nature.

The Confounding Variable Problem

Who can think of a problem with this approach? What else might be true of houses next to a really beautiful park versus houses in an urban concrete area? Rich people tend to live near nature. We are all thinking it, and it is true. Partly that is just a reflection of the fact that nature is valuable, but it also means that homes near green space tend to differ in many other ways that drive up price. This creates a confounding variable problem.

Statistical Solutions

This is why hedonic analysis is a deeply statistical approach that relies on regression analysis: we try to isolate the environmental attribute, like park proximity, from all the other housing characteristics. If we ignored characteristics like the size of the house, we would obviously misestimate how valuable the environmental amenity is. But there are tons of examples where we have enough data to do this well.
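To make the regression idea concrete, here is a minimal sketch in Python with entirely synthetic data; the house attributes, coefficients, and the park-proximity effect are all invented for illustration, not taken from any study mentioned in class:

```python
import numpy as np

# Synthetic housing market: sale price depends on square footage,
# bathrooms, and distance to the nearest park.
rng = np.random.default_rng(0)
n = 5000
sqft = rng.uniform(800, 3500, n)
baths = rng.integers(1, 5, n)
park_km = rng.uniform(0.1, 10, n)
# "True" effects chosen for the example: $120 per sqft, $15k per bath,
# and a $4k price penalty for each km farther from the park.
price = 50_000 + 120 * sqft + 15_000 * baths - 4_000 * park_km \
        + rng.normal(0, 20_000, n)

# Ordinary least squares on all attributes at once.
X = np.column_stack([np.ones(n), sqft, baths, park_km])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
# beta[3] isolates the park-proximity effect while holding house size
# and bathrooms constant: the core move in hedonic analysis.
print(f"estimated park effect: ${beta[3]:,.0f} per km")
```

Because park distance enters the regression alongside the other attributes, its coefficient is estimated holding those attributes fixed, which is exactly how hedonic studies separate the environmental amenity from the confounders.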

The Loon Study Example

One fun example is “a loon on every lake,” a hedonic analysis of lake water quality in the Adirondacks. The researchers collected a boatload of data, including the number of loons present on the lake in the year of the sale. They also measured many environmental variables, like the acidity of the water, how close the house was to the water, and the size of the lake, along with a whole lot of data on the attributes of the house: number of rooms, building age, building age squared (because the relationship may be nonlinear), and square footage. Combining all of these, even ignoring the environment, you can make a very accurate prediction of a house's sale price, especially across thousands and thousands of transactions.

Isolating Environmental Effects

What they were interested in was not just predicting the price of the house but isolating the effect of the presence of loons. The results show that an eleven percent increase in loons present in the year of sale was associated with a mean property value impact of $21,803. People love loons. People will pay a lot more money for a cabin on a lake that is pretty enough and wild enough to support loon populations.

Indicator Species and Environmental Bundles

You might want to break that down further: maybe the loon is just an indicator species, standing in for other bundles of environmental goods like better fishing, and the study examined many such variables too. Nonetheless, as long as you account for all the non-environmental attributes like the size of the cabin and distance from the city, the remaining price markup is a good estimate of how valuable nature is.

Travel Cost Method

Basic Concept

Another method is travel cost. Our goal is still to understand how much people value things that are hard to price directly. Often, people spend money on things that are necessary to consume the environmental good. The basic idea of travel cost analysis is that the travel cost, the amount you spend to make it to the park or whatever environmental amenity you are visiting, is a lower-bound estimate of your willingness to pay. You might get plenty of value on top of whatever you spent traveling there; that is the whole point of going on a vacation. But at least it is better than zero. If people spend $1,000 to make it to a lake, going to the lake is worth at least $1,000 to them.

Demand Curves from Distance Data

The cool thing is that you can exploit this information. Visitors come from locations closer to or farther from the park, so we can compare how likely people are to visit with the distance they had to travel, which is essentially the price of a visit. From this we can trace out a full demand curve for the park, with price on the vertical axis and the quantity of visits on the horizontal axis.
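As a sketch of how distance data becomes a demand curve, here is a toy zonal travel-cost calculation; the zones, costs, and visit rates are hypothetical numbers chosen for illustration:

```python
import numpy as np

# Hypothetical zones at different distances from a park, each with an
# observed visit rate. Plotting visit rate against travel cost traces
# out a demand curve for the park.
travel_cost = np.array([10., 25., 40., 60., 80.])      # $ per trip
visits_per_1000 = np.array([92., 70., 48., 20., 5.])   # annual visit rate

# Fit a linear demand curve: visits = a + b * cost
b, a = np.polyfit(travel_cost, visits_per_1000, 1)
choke_price = -a / b   # cost at which predicted visits fall to zero
print(f"demand: visits = {a:.1f} + ({b:.2f}) * cost")

# Consumer surplus for a zone facing cost c is the area under the
# demand curve between c and the choke price (a triangle here).
c = 25.0
q = a + b * c
surplus = 0.5 * (choke_price - c) * q
print(f"surplus per 1000 residents in the $25 zone ≈ ${surplus:,.0f}")
```

The surplus triangle is why travel cost gives more than a lower bound once the demand curve is fit: visitors who would have paid the choke price but only paid their own travel cost capture the difference as value.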

Flickr Photo User Days Data

One reason this method has generated a lot of interest is the data: there is really fun data you can use. This research was done at the Institute on the Environment down the hall. Researchers estimated something called Flickr photo-user-days. They found a really rich dataset of geotagged photos. If you look on your phone, a photo often records the specific latitude and longitude where it was taken, and if it is uploaded to a service like Flickr, we can use that data.

Inferring Visitation from Photo Data

We can infer that if lots and lots of photos are taken within a certain park, a lot of people actually traveled there. Combining this gives an indicator, for different parks in northern Minnesota, of how many photos per day there were on average. We can then cross-reference this with a road network to estimate the travel distance, and thus the travel cost, of actually arriving there. If anybody has ever done the Boundary Waters, it is very costly to get to; it is quite a distance away, whereas closer destinations are quite a bit cheaper. This was then adjusted with other data, like actual entrance fees, so the sites closest to the city were not necessarily the cheapest overall: they had the cheapest travel component, but there could still be costs of using them.

Predicting Lake Quality Effects

The idea here is the same as with houses. Instead of housing attributes, we collect data on things like lake size, lake clarity, and depth, along with other attributes such as whether the lake is in the Boundary Waters canoe area, whether it is in a state park, and, critically, whether invasive species are present. We can then combine those variables in a statistical model with the travel cost to see how much a variable like invasive species reduces how far people are willing to travel to a given park.

Invasive Species Case Study

The basic point is that invasive species like zebra mussels dramatically decrease the quality of a lake. Fish become scarce, because zebra mussels filter out the nutrients at the base of the food web, and their shells are so sharp you cannot step on them. We can get data showing how much less travel happens to these places, and if we also know how much money people would have spent on that travel, we get a lower-bound estimate of how valuable the lake is and how much value it lost when invasive species came in. This is a little different from the housing example but uses the same basic idea: as long as we can observe costs and statistically analyze how much they mattered in people's decision-making, that becomes useful information for putting a price tag on nature.

Contingent Valuation

Historical Context and Oil Spills

We might have to save the last method for the next class, but we should touch on contingent valuation. The question that really motivated it was a series of oil spills and other disasters. A lot of people were upset by events like the Exxon Valdez oil spill, very famous in the history of environmental protection, because of the enormous ecological damage it caused.

The Survey Approach

If you asked Americans at the time how much they would be willing to pay to avoid an oil spill and simply added that up, you would have a brute-force method with all sorts of challenges. But there are methods that let us estimate the overall amount people would have been willing to pay to prevent, for instance, another oil spill. A huge study found that on average each American household would have been willing to pay $31 to prevent another spill similar to the Exxon Valdez. Multiplied across all U.S. households, that gave a dollar value of $2.8 billion in damages that people in aggregate would have felt.
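The aggregation step is just multiplication. A back-of-envelope check, noting that the $31 figure is usually reported per household and assuming roughly 91 million U.S. households around 1990 (that household count is my assumption for illustration, not a figure from the lecture):

```python
# Scaling a per-household willingness-to-pay estimate to the nation.
wtp_per_household = 31            # $ per household (from the study)
households = 91_000_000           # assumed count of U.S. households, ~1990
aggregate = wtp_per_household * households
print(f"aggregate damages ≈ ${aggregate / 1e9:.1f} billion")  # ≈ $2.8 billion
```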

Legal Applications

What is useful is that these numbers can be used in court, and they actually were. Many of the lawsuits that followed, for the Exxon Valdez spill specifically and for many other environmental disasters, rely on a contingent valuation. The methods will wait until next class, but the main point is that we can identify how much people were damaged in aggregate. This can be used in court, and Exxon actually had to make these payments, which could go either directly to individuals or toward restoring the environment to its prior quality.

Recent Examples

More recently, the Deepwater Horizon spill put out roughly 200 million gallons in 2010; that is the one where the rig dramatically blew up. The exact same approach was used there. Contingent valuation has proven to be a useful tool in environmental litigation and policy.

Bringing Methods Together with Biophysical Models

Integration of Valuation Methods with InVEST

Just to close out for today, remember why we spent a lot of time showing where ecosystem services are provided. Most of those were biophysical indicators, essentially the quantity of ecosystem service. But now we are talking about a big grab bag of different methods where we actually can put a price on it. It was not emphasized in the ecosystem service models we ran, but many of them have options for putting different price tags from these different methods onto those ecosystem service goods. The methods we have discussed today—replacement cost, avoided cost, hedonic analysis, travel cost, and contingent valuation—can all be applied to the biophysical outputs from tools like InVEST to create comprehensive valuations of ecosystem services.

Next Steps

We will pick up with contingent valuation in the next lecture, where we will dive deeper into the methodological details. Additionally, there are quizzes to hand back. The scores have been online, but anyone who would like to see their actual Micro Quiz 3 can discuss it after class. Those who should stay for discussion: Denton, Alex, Kellen, Rhea, and Griffin. The rest of the quizzes have been left in the office and can be picked up later. More details next time. Thank you.

Administrative Announcements and Course Updates

Room Change and Meeting Logistics

The class will return to the normally scheduled room for the remainder of the semester. There was only one scheduling conflict, and the instructor will end the class five minutes early to prevent delays for another group setting up their event in the space.

Guest Lecturer Schedule for the Following Week

The instructor will provide updates on upcoming events and personal travel plans, as well as details about guest lecturers who will be visiting the class. On Monday, the instructor will teach as scheduled but will need to leave immediately after class to drive to the airport for a flight. There is approximately a ninety percent chance of making the flight, though the timing will be tight and will depend on factors like TSA wait times.

Colleen Miller, who serves as Senior Biodiversity Scientist at NatCap, will visit on Wednesday to discuss how biodiversity forms the basis of all ecosystem services. This topic is sometimes taught before diving into ecosystem services discussions because the rich and complex web of life that comprises ecosystems is the fundamental reason that nature provides value to human society. By presenting this content later in the course, it will serve as a retrospective look at the foundation underlying all the ecosystem services that have been discussed throughout the semester.

Distinguished McKnight Professor Carlisle Ford Runge will visit on Friday to discuss land and the history of economic analysis, or the historical exclusion thereof, regarding how land affects the economy. A key historical point is that the original classical economists cared greatly about land as a factor of production. However, during the nineteen-fifties, sixties, and seventies, economists decided that land was simply identical to any other type of capital. That decision led to economic models that focused only on labor and capital while ignoring land, a shift that has been greatly detrimental to understanding environmental economics and the role of natural resources in economic systems.

Personal Travels and Professional Speaking Engagement

The instructor will be traveling to the Chilean Central Bank to present on earth economy modeling. Similar material has been presented there before, so some slides will be reused from earlier presentations. The instructor will give a keynote address at a large international conference with representatives from many different central banks that want to implement earth economy modeling. The content taught in this class will likely be incorporated into that presentation, though the audience will differ significantly: instead of college students, it will be people with substantial financial resources who make important decisions about environmental protection. This is an interesting opportunity to show how the academic material has real-world applications for high-level policy and financial decision-making.

Instructor Health Update

The instructor experienced a kidney infection over the weekend, which resulted in a high fever reaching one hundred two point nine degrees Fahrenheit. While this was unpleasant, the instructor is not contagious and is on antibiotics. Kidney infections typically resolve quickly, unlike lingering flu viruses or coronavirus infections. The instructor is feeling better and is ready to continue with the course.

Course Structure and Final Project Overview

Introduction to the Final Project Framework

The instructor has updated the course website with the final project link and will walk through the details of the assignment. While the instructor has been indicating the general direction of the final project, the official assignment details have now been finalized. The final project is built on the foundation of all the skills and concepts introduced throughout the course.

The core concept of the final project is that students will imagine being asked by a senior policymaker in an assigned country to prepare a briefing document. This mirrors exactly what the instructor will be doing the following week, except the briefing will be for a central bank policymaker in Chile. The briefing will address what earth economy interactions the country will face over the coming decades and what actions should be taken in response.

Central Bank Perspectives on Environmental Economics

Central banks have long been analyzing climate change and are increasingly concerned about systemic risks to their countries’ economies. The mandate of central banks is to maintain a stable economy and stable currency. However, central banks are increasingly thinking beyond climate change alone, recognizing that the challenge involves both climate change and nature. There is growing interest among central banks in ensuring that their countries are resilient to both climate change and possible nature crises. The final project asks students to write a briefing that addresses this emerging concern from the perspective of a hypothetical central bank.

Final Project Content and Themes

The student reports will address key themes including market failure, sustainability, climate change, land use change, ecosystem services, future scenarios, and essentially all the topics covered throughout the course. These are the foundational concepts that students have been building towards since the beginning of the semester. The final project represents an integration of all these disparate topics into a coherent policy analysis.

Project Components and Grading Structure

The final project has two main components. The first component is a five-minute lightning talk where on the last day of class, all students will present their work. Five minutes is a brief presentation window, typically allowing for approximately two to four slides. The second component is the written report itself, which contains more detailed information and a more comprehensive rubric specifying expectations for each step of the project.

Key Deadlines

The rough draft of the final report is due on the second-to-last day of class. Students will present the slides version of their project on the last day of class. The final report is then due on the final exam date. These staggered deadlines allow students to receive feedback on their rough draft and incorporate those suggestions into their final submission.

Proposed Changes to Course Grading Structure

Rationale for Eliminating the Final Exam

The instructor is proposing a significant change to the course structure. In the initial syllabus, both a final report and a final exam were planned, similar to the midterm examination. However, upon reflection, the instructor believes that this type of material is not well-suited to a traditional examination format. Microeconomics material can be effectively assessed through a traditional exam, but when the course moves into spatial analysis, policy thinking, and sustainability, essay questions do not work particularly well as an assessment method. The material is better suited to the kind of in-depth analysis and synthesis that the final project requires.

The instructor proposes that the final project replace the points that would have been assigned to the final exam, eliminating the final exam entirely. This means that students will not have to sit down on May twelfth and write essay questions by hand, which is the only way to administer such an exam in the age of ChatGPT and other language models. This change would allow students to focus more time on creating a quality report that demonstrates their understanding of the course material.

Advantages of the Proposed Structure

One significant advantage of dropping the final exam in favor of the final project is that students have substantial time to respond to feedback and revise. Rather than a single deadline for the report, the staggered structure leaves time between the rough draft and the final due date for meaningful improvements, and students can also use feedback from their presentations on the last day of class. They will essentially extract slides from the essay: the key figures, such as maps of their country with one or two of the ecosystem services, will appear in the report and can then be copied directly into the presentation slides, making the process efficient.

Process for Implementing the Change

The instructor is trying to be fair to all students because the syllabus represents an agreement made at the beginning of the semester. Changing things in the middle of the course could potentially be unfair. However, the instructor believes that everyone will be in favor of this change because it provides superior educational outcomes. Here is how the change will be implemented: the instructor will update the website with the new percentages for the final grade, taking the score that would have gone to the final exam and allocating it to the final project. The instructor will post this change and send an announcement to the class. Students will then be given a couple of days to anonymously submit any concerns about the change. If there are no concerns, the class will move forward with the new structure. If concerns are raised, the instructor will address them.

An alternative approach would be to let students choose between the final exam and just the final project, but this would not be a good solution. Students choosing the final exam would still face the same amount of effort on the final project while receiving only fifteen percent of their score from an additional exam they would have to take. This would create an inequitable and inefficient situation.

Final Project Details and Requirements

Report Length and Structure

The report should be approximately two thousand words as a guideline, though the instructor will not count words precisely. Instead, the instructor will evaluate whether the student makes the key points that have been introduced in the assignment instructions.

Required Content Areas

Students must include discussion of their assigned country and how it fits into the planetary doughnut framing or other relevant framings of environmental economics. The report should discuss the specific challenges and market failures the country faces. Students must discuss land use in their country and the ecosystem services that are present or absent. The report should analyze natural capital in the context of the country. Students should examine future scenarios under different shared socioeconomic pathways (SSPs) and what those scenarios mean for their country. The report should conclude with policy recommendations and conclusions that flow from the preceding analysis.

Data Sources and Tools

The data that students will use are materials and datasets that have been introduced throughout the course. The instructor has collected key links together, including the SSP database which shows what will happen to GDP in each country under different assumptions about socioeconomic development. Students will use real data that analysts and policymakers use in professional settings. Additionally, students will use country geospatial data that the instructor has collected, which will be discussed in detail during the lecture and which students will use to run InVEST and conduct other spatial analyses for their projects.

The Course as Foundation for the Project

This entire course has been building toward the final project, which is why the instructor has been enthusiastic about developing it. This is the first time the instructor has taught this course. The course is not primarily an environment and natural resources course in the traditional sense—only about four lectures have focused on that topic. Nearly all of the other course content constitutes what will eventually be rebranded as “Earth Economy Modeling.” This is the frontier of where this type of analysis is going, and earth economy modeling is becoming an increasingly important approach to environmental policy and decision-making.

Valuation Methods: Contingent Valuation and Choice Experiments

Overview of Contingent Valuation

The instructor is returning to the slides left off from the previous class on valuation methods. One of the last methods discussed was contingent valuation, which is particularly important in environmental economics. Contingent valuation has been used for major environmental disasters like oil spills. It has been especially important to environmental economics discourse because the dollar values assigned through this method—often billions of dollars of damages to the environment—are what corporations have to pay if they lose lawsuits related to the environmental damage.

Legal Framework and Methodology

Contingent valuation works through court cases. Some body of people sues an oil company, arguing that the company has caused them harm. This is a very standard type of lawsuit, similar to a case where somebody crashed into your car and refused to pay. In such a case, you would have a civil claim against them for damages. The fundamental difference with environmental cases is that it is much harder to put a precise dollar value on the damages.

In a car accident case, you go to a mechanic and ask how much it will cost to fix the car. The court case then becomes about whose mechanic is correct regarding the repair costs. The assessment of whether the car is properly fixed is relatively clear and straightforward, so there is usually not a great deal of variation in expert opinions. Environmental cases follow the same basic structure, but with an important difference. You have experts—scientists instead of mechanics—who propose how much it will cost to restore or remediate the environmental damage.

Identifying and Quantifying Damages

In an oil spill case, some costs are relatively easy to identify. These include the amount of money spent on containment boats, the cost of fuel, and the captain’s salary. However, there are many other types of values that do not have an easily identifiable dollar value attached to them. What is the value of all the birds that were lost in the spill? While these damages may be identified by experts, they do not have a market value that can be easily discovered. When experts go to court, they use methods specifically like contingent valuation to demonstrate what the average person would pay for preventing that environmental harm from occurring.

Data Collection Approaches

The general approach used in contingent valuation is to ask people, either in a laboratory setting or in the field, how much they care about this environmental resource or avoiding this environmental harm. One approach involves experts mailing physical letters to randomly chosen individuals, asking how much they would be willing to pay to prevent or remediate the environmental damage. This approach is problematic because respondents can make up whatever number they want without any actual obligation to pay it. There is very little constraint on the answer they provide.

Some better approaches use real money to make the contingent valuation exercise more realistic. A key example comes from research on hunting access. Hunting permit applicants were given a choice between keeping their free hunting permit or receiving a one hundred dollar gift card, but not both. A hunter who keeps the permit rather than take the card reveals that the permit is worth more than one hundred dollars to them. The researchers sent different gift card values to different groups to figure out how much hunting was actually worth. This design is clever because it uses real money rather than a fill-in-the-blank survey where respondents can provide any made-up dollar value.

The original study in this area actually sent a check to respondents and checked whether it was cashed. If the check was cashed, the researchers canceled the hunting permit. Though this seems harsh, the point is that there are many different ways to ask people and get a dollar value. If done well, these values are hopefully usable in court to establish damages and appropriate compensation.
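A sketch of how varying the offer pins down the permit's value: if we record, for each offer level, the share of permit holders who take the card, the offer at which half would switch approximates the median value of the permit. All numbers here are hypothetical:

```python
import numpy as np

# Hypothetical gift-card experiment: each group is offered a different
# card value; we record the share who give up the permit for the card.
offers = np.array([50., 100., 150., 200., 300.])          # $ card values
share_taking_card = np.array([0.08, 0.22, 0.41, 0.63, 0.86])

# Median willingness to accept: the offer at which half would switch,
# found by linear interpolation between the observed shares.
median_wtp = np.interp(0.5, share_taking_card, offers)
print(f"estimated median permit value ≈ ${median_wtp:.0f}")
```

Because each respondent faces a real trade-off, the shares are revealed preferences rather than stated ones, which is the whole point of the real-money design.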

The Problem of Strategic Responses

Problems with contingent valuation arise when researchers do not use payment cards or real money. When people see these surveys, are clever enough to guess they are for environmental valuation, and really like the environment, they will report a very high value for the environmental resource because they know they will never actually have to pay. There is very good evidence in the academic literature that stated willingness to pay is two to three times higher in studies that do not use real money than in studies that do. People do game the system when they can do so without real financial consequences.

Connection to Choice Experiments

Contingent valuation is tightly related to choice experiments, which derive from a similar underlying idea but consider a more complex set of choice options. A typical choice experiment might be aimed at eliciting what ecosystem people really prefer. Respondents might be given a choice between an ecosystem that is a park with benches and garbage cans and development that changes species distribution—perhaps mice come in around the garbage cans while songbirds leave—versus another option with a path but no infrastructure and a different set of animals.

This method asks people which bundle of goods they would prefer. A real Department of Natural Resources study tried to figure out the value of lake clarity under different configurations. The study included many scenarios varying things like lake color, boat launch availability, and distance from the respondent’s home. By polling enough people, researchers can fit a statistical model that predicts which lake configuration people would choose. This approach is useful for eliciting the value of a bundle of different ecosystem amenities. A clear lake is generally good, but how much people value it depends on whether they are an angler who values fish or a wakeboarder who values clean water for the activity itself. Choice experiments can help disentangle these preferences.
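The statistical model behind a choice experiment can be sketched as a multinomial logit: each option’s attributes map to a utility, and choice probabilities follow from the utilities. The attribute weights and lake configurations below are invented for illustration; real studies estimate the weights from survey responses.

```python
import math

# Invented part-worths for three lake attributes; a positive weight means
# the attribute raises a respondent's utility.
beta = {"clarity_m": 0.8, "boat_launch": 0.5, "distance_km": -0.05}

def utility(option):
    """Linear utility: weighted sum of the option's attribute levels."""
    return sum(beta[name] * level for name, level in option.items())

def choice_probs(options):
    """Multinomial-logit probability that each configuration is chosen."""
    weights = [math.exp(utility(o)) for o in options]
    total = sum(weights)
    return [w / total for w in weights]

clear_but_far = {"clarity_m": 3.0, "boat_launch": 1, "distance_km": 40}
murky_but_near = {"clarity_m": 1.0, "boat_launch": 1, "distance_km": 5}
probs = choice_probs([clear_but_far, murky_but_near])
# For these invented weights, the nearby murky lake narrowly wins:
# the distance penalty outweighs the clarity gain.
```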

Valuation Methods: Benefits Transfer

Definition and General Approach

The last method in the valuation toolkit is benefits transfer, which is sometimes referred to as the dark arts. It is called that because most academics find this method very problematic. However, it is the easiest to implement, so high-paid consultants often use it when the government pays them to put a dollar value on nature. The benefits transfer approach involves taking existing studies where somebody did a good job assessing ecosystem service value through any of the other valuation methods and applying those values to new contexts.

Implementation Process

The benefits transfer approach uses existing studies as its foundation. Researchers conduct a literature review, extract the stated willingness to pay or estimated value from those studies, and ideally construct a function describing all studies. This function maps how the value depends on something like lake clarity or some other variable that affects ecosystem service provision or the demand for ecosystem services. Then researchers take that willingness to pay value and transfer it to all hectares of that particular land type.
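The unit-value version of the method fits in a few lines. All values and areas below are invented; the point is that the headline total is just a per-hectare study value multiplied over every hectare of each land type, which is exactly why the method is so sensitive to how representative the source studies are.

```python
# Hypothetical per-hectare values taken from primary studies (USD/ha/year)
# and hypothetical hectares of each land type in the study region.
study_values = {"wetland": 9000, "cropland": 90, "forest": 1000}
area_ha = {"wetland": 2_000, "cropland": 50_000, "forest": 10_000}

def transfer_total(values, areas, adjustment=1.0):
    """Naive unit-value transfer: value/ha * hectares, optionally rescaled.

    A more defensible transfer would replace `adjustment` with a function
    of site characteristics (income, access, ecological condition).
    """
    return sum(values[lt] * areas[lt] * adjustment for lt in values)

total = transfer_total(study_values, area_ha)
# total = 32,500,000: a headline number that assumes every hectare is
# worth exactly the study-site average for its land type.
```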

The Costanza Example: Valuing Global Ecosystems

A famous and heavily criticized example of benefits transfer is the Costanza et al. estimate of how valuable all of nature is globally. The researchers concluded that nature was worth thirty-three trillion dollars per year. What they did was one gigantic benefits transfer: transferring studies on how much a particular parcel of land is worth to every hectare on Earth of that same land type. In the Costanza case, they had studies on how valuable ocean access was for fishing and recreation, then assumed every hectare of ocean had that same value. This created an enormous overestimate of the value of global ocean ecosystems.

Problems with Spatial Heterogeneity

There are obvious reasons why this would not be a good method for establishing total ocean value. The studies underlying the Costanza analysis were heavily biased toward extremely high-value areas. Miami’s real estate amenity value from beach access is probably much higher than the Congo coastline’s, and vastly higher than that of a random hectare in the deep sea with no beaches, no surfing opportunities, and limited accessibility. The fundamental problem is that it is hard to get a value that scales appropriately with spatial heterogeneity across the globe. The Costanza numbers were heavily criticized for essentially taking the value of Miami beachfront as the value of every single part of the entire ocean, which obviously is not correct.

Legitimate Applications

There are legitimate applications of the benefits transfer method when it is done less egregiously and with more attention to spatial and ecological heterogeneity. However, when ecosystem services start becoming valuable to governments, consultants who make money off this science crop up and can essentially make the science bad. These consultants will do whatever is easiest to do and easiest to sell to their clients, which often means oversimplifying the analysis and transferring values inappropriately across contexts. This is a significant problem in applied environmental economics when real money is involved and there are financial incentives to frame problems in particular ways.

The Reality of Ecosystem Service Valuation in Practice: The Minnesota Lakes Case

Complexity Beyond Modeling

The instructor wants to tie the theoretical discussion of valuation back to real-world practice in Minnesota. While the course has discussed ecosystem service models and noted that tools like InVEST make it easy to estimate value, in reality when you look at specific cases, the work becomes much more complicated. It is hard to put an adequate dollar value on something when there are many different types of users with different preferences and different benefits from environmental changes.

Research on Water Quality Valuation

There was a really good study conducted by Bonnie Keeler, a former employer of the instructor and now director of the Water Research Institute, where they looked at this complexity closely. The researchers tried to figure out what you need to consider when tracking why people might value water quality in lakes. They argued that you need a valuation approach that is sensitive to different actions that affect water quality. Additionally, you need to identify different use endpoints—how people actually use the water—and must recognize that there are unique groups of beneficiaries who are all differently affected by environmental changes.

Framework for Water Quality Valuation

The researchers presented a systematic way of thinking about how some set of actions affects water quality, and how different action sets lead to different water quality outcomes. Research then links those physical changes to the resulting changes in ecosystem services. But critically, as a final and essential step, you have to think about the change in value that is specific to different beneficiary groups. Different people and communities are affected differently by changes in water quality, and the economic value of those changes differs accordingly.

Drivers of Water Quality Change in Minnesota Lakes

The researchers mapped this framework out with the specific case of Minnesota lakes. They started by asking what actions might happen that could affect water quality. They identified primary and secondary drivers of water quality change. For example, with nitrogen from applying fertilizer on agricultural land, you have an action causing increased nitrogen loading to water bodies, which affects water clarity through algal blooms. These algal blooms have secondary effects on fish abundance and pest abundance. The researchers continued analyzing all different actions and drivers, including sediment loading, temperature changes, or toxin inputs.

Effects on Different Value Components

These drivers have different effects on different parts of the value change. They identified specific Minnesota lake ecosystem services: lake and river fishing opportunities, swimming, boating, trout angling, nature viewing, navigation, hydropower, commercial fishing, and safe drinking water. These are things for which people spend a lot of money and which provide significant value to communities. You then have to get from that physical change in water quality—which affects opportunities for swimming or fishing—to the dollar value associated with it for different groups.

Valuation Methods for Minnesota Lake Ecosystem Services

The researchers enumerated all the different valuation methods and their applications to Minnesota lakes. From their work, you can see references to specific valuation examples: avoided sedimentation through avoided water treatment costs, which is addressed using the avoided costs method; the value of swimming, which is harder to estimate but might come from contingent valuation studies; and the value of avoided death or illness through contaminated irrigation water. There is a whole additional set of literature on how to deal with changes in ecosystem services and their value when they prevent people from dying or becoming ill. There will be a special lecture on this topic, and this component represents a big part of the overall valuation puzzle in water quality applications.

Introduction to the InVEST Annual Water Yield Model

Model Overview and Purpose

The instructor now wants to quickly introduce the InVEST annual water yield model. Students do not need to open up or access the model during this lecture—the instructor will save actually running it for the next class. The instructor wants to go slowly and spend time introducing the model because this is where students will actually look at the data they will be using for their final projects.

Defining Water Yield

The basic concept the model addresses is water yield. When researchers refer to water yield, they are referring to water that is yielded into the economic system—water that becomes available for human use. In this case, the specific water yield being modeled is water that ends up in a reservoir, the area behind a dam. This water is particularly useful because it can be pumped to a water treatment plant for drinking water or straight to fields for center pivot irrigation for agricultural production.

Factors Determining Water Yield

What factors determine how much water is yielded into a reservoir? Obviously, precipitation matters as the key input to the system. But a whole set of other factors matters as well: you have to think about subsurface processes and vegetation processes that determine the final quantity people care about, the yield. Understanding water yield requires thinking about multiple interconnected hydrological processes.

Water Inputs and Groundwater Recharge

You have to think about inflow: just as with sediment retention, water flowing in from other locations in the watershed matters. Precipitation combined with inflow infiltrates into the ground, and some of it becomes groundwater recharge, which is certainly valuable for anybody with a well; groundwater recharge is why your well does not go dry during droughts. But from the perspective of the reservoir, water that infiltrates into the ground is not available as yield that can be pumped from the reservoir. There is a fundamental trade-off between groundwater recharge, which is valuable for some uses, and surface water yield, which is what ends up available in the reservoir.

Evaporation and Transpiration

The second major process everybody knows about is evaporation. Depending on temperature, wind, and exposure, some of the precipitated water evaporates and returns to the atmosphere before it goes below ground. But the critical thing that many people forget is the ecosystem’s contribution through transpiration. Transpiration is the water on the surface or underground that gets taken up by plants, which use it to produce sugars and other compounds that help them grow. For the purposes of water yield, what really matters is that plants keep that water from being yielded to the reservoir, so you need to account for what vegetation is present in the watershed.

Complex Effects of Vegetation on Water Availability

This matters in some good ways and some bad ways. Vegetation can slow down water flow and increase groundwater recharge by slowing water movement through the soil. But vegetation also changes timing of water availability. Some water will transpire up into the atmosphere through plant leaves, which is bad for short-term yield, but it is good in another sense because transpired water eventually precipitates again elsewhere, spreading out the window in which water reaches the reservoir system. This temporal smoothing of water availability can be quite valuable for water management.
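InVEST’s actual annual water yield model uses a Budyko-curve formulation; the sketch below, with invented numbers, uses the simpler identity yield = precipitation minus actual evapotranspiration just to show how vegetation (via a crop coefficient) reduces yield.

```python
# Minimal per-cell annual water yield sketch. All numbers are invented;
# InVEST itself uses a Budyko-curve formulation rather than this
# simple cap, but the qualitative effect of vegetation is the same.

def annual_water_yield_mm(precip_mm, ref_et_mm, kc):
    """Return water yielded per cell in mm/year.

    Actual evapotranspiration is approximated as kc * reference ET,
    capped at precipitation (a cell cannot evaporate more water than
    it receives; inflow from upslope is ignored in this sketch).
    """
    aet = min(kc * ref_et_mm, precip_mm)
    return precip_mm - aet

# Forested cell (high kc) vs. sparsely vegetated cell (low kc)
# under the same invented climate:
forest = annual_water_yield_mm(1800, 1200, kc=1.0)  # 600 mm reaches yield
sparse = annual_water_yield_mm(1800, 1200, kc=0.3)  # 1440 mm reaches yield
```

The comparison captures the trade-off in the text: the forested cell transpires far more of its precipitation, so much less water is yielded downstream.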

Pause for Next Class

The instructor will pause here and pick up the detailed discussion and modeling of water yield in the next class when the actual InVEST model will be run. That exercise will involve using geospatial data to compute and understand water yield in specific contexts. The basics of what will be computed using InVEST and geospatial data have now been introduced, and students should have a conceptual foundation for understanding the model when it is actually demonstrated.

Introduction and Course Logistics

Welcome and Daily Agenda

Day 3 of the valuation slides builds upon the PowerPoint presentations that have been used throughout the course. Students are encouraged to refresh their browsers if they maintain multiple Chrome tabs, as the slides have been updated with new information and examples. The agenda for this lecture session focuses on diving directly into water yield analysis using country-specific data, which simultaneously provides students with practical preparation for their final projects. Following the water yield section, the lecture will transition to examining the value of a statistical life, remaining within the ecosystem service valuation framework because ecosystem services contribute significantly to human welfare by literally keeping people alive. The extent to which ecosystem services reduce mortality through channels such as air pollution reduction represents one of the key mechanisms by which value is obtained from these services.

Administrative Announcements Regarding the Final Exam

Several logistical comments deserve attention at the outset of this lecture. No objections were raised regarding the proposed modifications to the final exam structure, which means that the final project will now take the place of the traditional final exam. This structural change offers students considerable flexibility, as they will be able to submit their final projects early, which solves any travel-related complications for students with holiday plans or commitments. Students will present their final projects on the last day of class, which occurs many days before what would have been the final exam day—approximately the fourth of the month. Because no objections were received from the class, the syllabus and grade weightings in Canvas will be updated to reflect these changes accordingly. This modification will give students more time to focus on what the instructor considers the novel and meaningful aspect of this class rather than traditional midterm-style examinations. The instructor sought confirmation that students were comfortable with these changes before proceeding.

Upcoming Guest Lectures and International Engagement

Speaking of reports and presentations, the instructor has been consistently referencing guest lectures, and the preliminary agenda has now been released. The instructor will be giving a keynote lecture at this series of guest lectures. The main speaker is the head of the Network for Greening the Financial System, which is an organization composed of approximately 140 central banks worldwide working cooperatively to future-proof their banking strategies so that climate change and nature collapse do not cause their economies to collapse. These represent really significant names and institutions in the field, which understandably creates some nervousness for the instructor. The keynote will require speaking for an hour and a half, which is longer than any lecture the instructor has previously given, and this has necessitated generating entirely new slides for the presentation. As a Midwesterner, the instructor remarked that they do not typically like to boast, but in this case they decided to mention this significant professional opportunity.

Water Yield Model: Theory and Application

Scientific Foundation of Water Yield

Water yield, one of the ecosystem service models explicitly included in the report to be presented to the Chilean Central Bank, has already been introduced through a quick discussion of its scientific basis at the end of the previous class. The basic science of water yield relies on the relationship between precipitation and evaporation, but this relationship becomes substantially more complex when plants enter the equation. Plant roots play a really significant role in the water cycle by increasing the extent to which groundwater recharges into the soil rather than simply running off the surface. Simultaneously, plants extract water from the soil through their root systems and transpire this water back into the atmosphere, further reducing the net water yield available for other uses or reaching downstream areas. This combination of processes—infiltration through plant roots, soil percolation, and transpiration—creates the fundamental dynamic that the water yield model attempts to capture.

Connecting Theory to Student Data Collection

The instructor wants to connect the water yield theory to the data that students have been collecting for the class throughout the semester. Screenshots included in the course slides show how the instructor has organized their own data if students want to refer back to this example for their own organizational approaches. Students are obviously free to organize their data however they prefer, but the instructor’s approach provides a useful template. The instructor maintains a folder structure for APEC 3611, which contains the repositories used to publish the course website. Previously, students had been using InVEST base data, where all the different ecosystem services and their associated data were ready to input directly into the model without significant preprocessing.

The Challenge of Real-World Data Organization

The instructor has taught InVEST many times across different courses and institutions, and almost universally the main sticking point is when students rightfully point out that it is easy when all the data is provided in a pre-organized format. The experience of using InVEST feels falsely easy when the data is nicely prearranged by someone else. That is why, for this particular class, the instructor recommends keeping a separate folder for the final country report; that is where students should download their country-specific zip file. The instructor will use Nicaragua for demonstration purposes, believing that nobody else selected Nicaragua for their country project. The zip file has been placed in the appropriate location, and the class will use this data for the InVEST analysis during this session. Even an initial exploration of the Nicaragua data makes it apparent that the data is not organized by model, so one of the key jobs in the country reports is figuring out what is actually contained in these data files and how it should be organized.

Data Literacy as a Professional Skill

As a dedicated researcher, the instructor notes that most of their time on new research projects is spent just looking through datasets and reading documentation. This is a very real skill that professionals use constantly in environmental science and related fields. One of the key files the instructor wants students to reference is the documentation included with the data packets. The documentation looks somewhat different for each country, but not dramatically so. It is very tempting for students to ignore files like readme.pdf, but the instructor strongly cautions against this. These readme files contain condensed and critical information. All of this data comes from the Integrated Economic-Environmental Modeling Platform created by one of the instructor’s colleagues and friends, Onil Banerjee. The documentation describes what is inside each data packet. These packets are designed to be plug-and-play for the four InVEST models that students have the option of running in this course. The documentation also gives a good set of descriptions of how students might use the models, describing the different sections and providing important notes on all four models. The instructor strongly recommends reading this documentation carefully to get up to speed on what the data package contains.

Data Sources and Documentation Structure

The documentation will describe where the data came from and how it was assembled. If students were undertaking this analysis for another region for which they did not have a person providing them the data already compiled, this documentation section is where they would go to understand exactly how to proceed and where to obtain the data themselves. It documents precisely how students would conduct the process and gather the necessary datasets independently. A really useful table is included in the documentation showing the four ecosystem service models that these data packets are built to support. This table also shows which of the data layers are used in which of the different models. For instance, the carbon storage model will use land use data and information on soil carbon storage, while the annual water yield model will use substantially more data layers. Students should check out these tables in detail to understand the full scope of what data are involved in each model.

Country-Specific Considerations and Data Challenges

For specific countries, many of them have country-specific information about the data included in their documentation. There are always different challenges when working with data from many countries, and even when using global datasets, there might be different complications or complete omissions of data that need to be understood. Students will want to read through these country-specific notes carefully. A lot of the information will be the same as what is in the general README, but these notes will highlight any important things that need to be known for working with that particular country’s data. The instructor would strongly recommend that if students are using Google Drive Desktop Sync, they should copy this data over onto their computer before beginning analysis. It can get quite challenging when pointing to cloud-hosted files and trying to perform intensive geospatial analysis on them. The best and most reliable way to do this work is to download the data. The instructor has downloaded the zip file and extracted it on their local computer, so they are not working directly from Google Drive, and they recommend this same approach for students.

Getting Started with InVEST and QGIS

Initial Setup and Workspace Configuration

With all that background and setup information covered, the instructor invites students to open up both InVEST and QGIS, as they are going to dive straight into working with the water yield model. As InVEST loads up on students’ computers, they should see the home screen with all the available models listed. If InVEST loads directly into one of the different models and students do not see this full list of options, they can simply click on the home button that brings them back to the interface showing all the available models. The class will be working with the annual water yield model today. This process should start to feel familiar now that students have gone through this setup with two other models in previous sessions. The big difference in today’s session is making sure that the workspace is configured to point to the country-specific folder that each student is using. For the instructor, teaching this class means the final report will be based on Nicaragua, and that is where the workspace should be configured to point. That is where the instructor recommends setting each student’s workspace as well.

ISO Country Code Conventions

The instructor pauses to ensure that everyone is following along, not wanting to power ahead and leave behind students who are still setting up their workspaces. Students should note that their files will not be named NIC, the three-letter code specific to Nicaragua; each student’s country-specific files carry their own code from the ISO 3166-1 alpha-3 standard, which assigns three-letter codes to countries by international convention. The instructor asks whether students have the country data downloaded yet or are still finding and downloading it. After confirming that some students are still getting their data, the instructor continues moving forward while planning to check back to ensure everybody has reached this point in the setup process.

Using File Suffixes for Model Iteration

One thing to note about the InVEST interface is that students have been skipping over the file suffix option, but this is actually a really useful trick that becomes important when conducting sensitivity analysis or testing different model specifications. If students want to run multiple different versions of the model—for instance, to test what happens if they use one precipitation layer versus another—this small text box lets them put a suffix on the end of all the output files. So instead of each new run completely overwriting the previous files, the outputs from each run will have that suffix appended to allow them to be kept separate. This means students will end up with multiple different output files that they can compare to understand how changes in inputs affect the model outputs. This functionality is possibly quite useful for their final projects.
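The mechanics of the suffix option are simple enough to sketch: a suffix is spliced into each output filename before the extension, so runs stop overwriting one another. The function and filenames below are illustrative, not InVEST’s actual internals.

```python
# Sketch of InVEST-style run suffixes. The output names and suffix
# strings here are invented; only the naming pattern is the point.
def suffixed(filename, suffix=""):
    """Append a run suffix before the file extension (assumes one exists)."""
    if not suffix:
        return filename
    stem, _, ext = filename.rpartition(".")
    return f"{stem}_{suffix}.{ext}"

# Two runs with different precipitation layers no longer collide on disk:
run_a = suffixed("water_yield.tif", "chirps")  # 'water_yield_chirps.tif'
run_b = suffixed("water_yield.tif", "era5")    # 'water_yield_era5.tif'
```

A pair of suffixed runs like this is exactly the setup needed for the kind of input sensitivity comparison described above.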

Water Yield Model Parameters

Precipitation Data

The precipitation parameter is obviously one of the main drivers of water yield and of the amount of water available for ecosystems and human use. This is where students get accustomed to navigating a different file structure than they may have used before, though the Nicaragua example’s structure is pretty straightforward to follow. The precipitation file is named annual precipitation; in the instructor’s example, it is specifically the Nicaragua annual precipitation file. Students should look for the equivalent annual precipitation file in their own country-specific data folder.

Reference Evapotranspiration

For the reference evapotranspiration parameter, the instructor has located an example file, and in the Nicaragua case, it is named Nicaragua reference evapotranspiration. This parameter drives the amount of water that is returned to the atmosphere through evaporation from soil and water surfaces plus transpiration from vegetation.

Root-Restricting Layer Depth

Root-restricting layer depth is an important parameter in the water yield model that can sometimes be overlooked. This parameter is represented as a geospatial map indicating the soil depth at which root penetration is strongly inhibited because of physical or chemical characteristics of the soil. This parameter really matters because it indicates how far down into the soil profile plants can push their roots before hitting bedrock or other impenetrable layers. If you are on a mountainside that has not had much vegetation growing on it for very long, you are going to have pretty thin soil, and the root-restricting layer is going to be relatively close to the surface. Essentially, the root-restricting layer depth indicates where the soil transitions to stone. That information can be derived from soil maps created by soil scientists and other experts, and it has already been preprocessed and provided in the data packages.

Plant Available Water Content

Plant available water content depends on the vegetation type of the area and is represented as a fraction. This parameter answers the question: what is the fraction of water that can be stored in the soil profile available to plants? This depends on all sorts of different attributes of the soil, including texture, structure, and mineralogy, but it is absolutely critical in the water yield model context because it determines how much water percolates into the groundwater versus how much is evapotranspired up through the plant back into the atmosphere.

Land Use Land Cover Data

Land use land cover is a parameter that students have worked with before in previous ecosystem service models. In this case, the data layer used is the LULC CCI layer, from the Climate Change Initiative, a project of the European Space Agency. ESA does a whole lot of interesting research nowadays, and although they may not have rockets as advanced as some other space agencies, they put their rockets to better use. When specifying this parameter, students should point to the .tif file, which is the raster format, rather than to any XLS or other tabular format files.

Biophysical Tables

For the biophysical tables parameter, this information is found under Model Lookup Tables in the data structure. Students need to make sure they point the model to the annualwatercci.csv file specifically, which contains the necessary biophysical coefficients for the water yield model applied to their landscape.
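Before running the model, it can help to peek at the biophysical table programmatically. The sketch below parses a made-up table whose column names (lucode, LULC_veg, root_depth, Kc) follow the InVEST annual water yield documentation; every row is invented, and each country’s actual CSV will differ.

```python
import csv
import io

# A tiny stand-in for the biophysical table. Column names follow the
# InVEST annual water yield convention, but all rows are invented.
sample = """lucode,LULC_veg,root_depth,Kc
10,1,3000,1.0
30,1,1000,0.65
190,0,1,0.3
"""

# Index rows by land use code so coefficients are easy to look up.
rows = {int(r["lucode"]): r for r in csv.DictReader(io.StringIO(sample))}
kc_for_code_10 = float(rows[10]["Kc"])  # crop coefficient for LULC code 10
```

Spot-checking the table this way (are all land use codes present? are Kc values plausible?) catches formatting problems before a long model run fails.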

Estimating Model Parameters

The Challenge of Parameter Estimation

Now it will be a little harder for students to fill in the remaining parameters, since they are not explicitly provided in the data. Students who really want to impress the instructor on the final project could look up values for these parameters in academic literature published for their particular country. Alternatively, they can use the user’s guide for the InVEST model, which gives the typical range of values these parameters take across different environmental contexts. For the purposes of getting the model running today, students can just choose something in the middle of that range. Of course, students could always spend more time justifying these parameter choices if they wanted to develop their projects further. If students submit this kind of work to peer review and go down the route of becoming a scientist, this is usually where they get criticism: why did you choose a value of 15? Is that just the middle of what the user’s guide says, or is it based on local research or expert opinion? A midpoint alone is not usually considered a strong argument from a scientific standpoint, but it will work for the purposes of this class assignment and for getting started with the model. For today’s demonstration, the instructor will use a Z parameter value of 15.

Watershed and Sub-Watershed Data

Then there are a few more elements that need to be specified. Students are going to have to give the model both a watershed vector file and a sub-watershed vector file. Before proceeding with the technical details, the instructor wants to talk about watersheds themselves, as they are a foundational concept in hydrology and ecosystem service modeling. The instructor has referenced this concept very briefly before, but it deserves more thorough explanation. If students imagine having a country, let us just pretend this is an island country. Like most islands, this country is probably volcanic, so there is probably a mountain right in the middle of it. Just pretend that mountain represents the topography of the island.

Understanding Watershed Delineation

If students want to know what the watersheds are for this island, and they identify the peak, they can think about defining a watershed by asking the question: if a drop of water landed at some random location on the island, where would it flow? The answer depends entirely on the valleys and the slopes. The water would flow down through the valleys and eventually flow out to the ocean. Now if students ask about some other point on the island, that water is going to flow down some other valley depending on the local topography. These flow paths are not stream networks, although initially they might look like a simple network of streams. At some point the flow accumulation gets so substantial that it actually does become a recognizable stream. But basically, a watershed is going to be defined by the very bottommost point where water flows out and the whole area that flows into that outlet point. It is basically just a catchment area, which is the same concept students learned about before when discussing the sediment retention model.
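The drop-of-water thought experiment can be made concrete with a toy steepest-descent trace (a simplified D8 scheme) on a tiny invented elevation grid: each cell’s flow path is followed downhill until it reaches a cell with no lower neighbor, and all cells sharing that outlet form one watershed.

```python
# Toy steepest-descent flow tracing on a tiny invented elevation grid.
# Real delineation tools use D8 flow direction on a filled DEM; this
# sketch only illustrates the "where does a drop flow?" question.
dem = [
    [9, 8, 9, 9],
    [8, 5, 6, 9],
    [7, 4, 5, 8],
    [9, 3, 6, 9],
]

def downhill(r, c):
    """Return the lowest neighbor of (r, c), or None if (r, c) is a sink."""
    best, best_h = None, dem[r][c]
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr or dc) and 0 <= rr < len(dem) and 0 <= cc < len(dem[0]):
                if dem[rr][cc] < best_h:
                    best, best_h = (rr, cc), dem[rr][cc]
    return best

def outlet(r, c):
    """Follow steepest descent from (r, c) until no lower neighbor exists."""
    cell = (r, c)
    while (nxt := downhill(*cell)) is not None:
        cell = nxt
    return cell

# Every cell of this tiny "island" drains to the single low point at (3, 1),
# so the whole grid is one watershed with that cell as its outlet.
outlets = {outlet(r, c) for r in range(4) for c in range(4)}
```

Grouping cells by their outlet is exactly the watershed delineation the text describes; running the same grouping from an intermediate point upstream would carve out a sub-watershed.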

Sub-Watersheds and Hierarchical Thinking

The cool thing is that students can also think about sub-watersheds as a hierarchical concept nested within the larger system. What matters is that this is the whole watershed because the water flow really ends there at the outlet, but students could also conceptually ask what would happen if they started counting from some higher point upstream. They could then draw the sub-watershed, which would be the subset of the larger watershed of all the points that flow into that intermediate point, which will then obviously continue flowing on down to the ultimate outlet. This hierarchical nesting of watersheds is really important in hydrology.

Applications to Ecosystem Service Modeling

Hydrological engineering comes in here as a key consideration, and lots of different models operate on a watershed-by-watershed basis because water flows downstream and interacts with other water and materials at different scales. Students might be thinking, what happens in the chemical mixing as the nutrients and other things mix with all the other chemicals coming in as the water travels down the stream and eventually makes it out to the ocean or into a reservoir? So the InVEST model is going to look at these results in terms of both the full watershed and the sub-watershed level. For the Nicaragua data that the instructor is using, the watershed and sub-watershed files are nicely labeled, making identification straightforward. The first one for students to select is the watershed file, and then for the other one, students select the sub-watershed file. The instructor indicates they are going to circulate around the classroom and check on people’s data to help troubleshoot, but students can also keep powering ahead if they want to.

Data Formatting Across Countries

Some of the countries are not formatted exactly the same way as others in the data packages provided, but so far everything is looking great as students load their data. The data should be relatively consistent across countries, but students may notice some variations in naming conventions or file organization.

Running the Water Yield Model

Optional Demand Tables

For the optional demand tables, students could run the water yield model without including these, but the model prompts students to find them. The instructor actually thought the demand table should be there but is not locating it immediately. The instructor asks if anybody else has found the demand table in their country-specific data yet. Since the demand table is not being located, the instructor decides to run the model without it for now, and they give the go-ahead for students to do the same thing.

Practical Considerations for the Final Project

For students’ final projects, locating the right data and properly running the water yield model is going to be one of the heavier lifts if they want to include this particular ecosystem service in their analysis. Figuring out how to work with this model and its various data requirements will be an important skill, and the InVEST user’s guide is going to be absolutely essential for this work. The instructor has already clicked Run on their model, so students can do that as well and see what happens with their results.

Processing Time and Data Scale Considerations

For the very first time in the class, students are not all running on the exact same data. This is where students get to discover whether their country is large or small in terms of how long the model takes to run. Much of the instructor’s professional life has been focused on getting models to run faster on computers, optimizing code and algorithms for efficiency. The instructor chose a very small country with Nicaragua, so the model only took 4.18 seconds to complete. But as students get to bigger and bigger countries, no matter how fast their computer is, it starts to become a real big-data challenge that cannot be overcome by hardware alone. The instructor asks who has the largest country among the students. Mexico emerges as the largest, and the instructor checks whether that student is still waiting for their model to run. The real reason the instructor wanted to do this analysis in class is to acclimate students to the reality that geospatial analysis can take time. This is a first stress test of students’ computers to see how they handle intensive processing. If students really want to throw in the towel and choose a smaller country instead, the instructor says maybe they will allow it, but they encourage students to see if they can make it work. Students should let the instructor know if they have any troubles. The model should generally run in a reasonable amount of time for most countries. But students with one of those really big countries might not get results in the time available right now. Those students will have to plug their computer into power and wait overnight or for an extended period for the model to complete. Students who are not going to reach a power outlet for a while might want to hit pause on the model run. 
The instructor has had that happen before where they were running a model and they could not drive home fast enough to get their computer plugged back in before the battery died. When using all the cores on the computer’s CPU, the device uses the battery way faster than normal operation. Only one student has finished so far, which suggests the instructor’s prediction about processing time was accurate, so the group is doing well. The instructor decides to take a look at the results from that completed model run.

Interpreting Water Yield Model Results

Understanding the Output Structure

This is one of the models where the developers of InVEST have not yet created an automatic report generation system. The other ecosystem service models had two buttons at the completion screen: one labeled Open Workspace, which opens the folder where results are stored, and another labeled Open Report, which provides a nicely formatted HTML document with all the results laid out. For this water yield model, students just have to do it themselves using the output files. What students see after running the model is the workspace for Nicaragua, or whatever country they are working with. The instructor might have been wise to put it in a separate folder from the input data, but the data structure is kind of nice in that it has an output data folder. That folder is where all the actual results are stored.

Visualizing Watershed Results in QGIS

The instructor is going to open up their QGIS and show students the first and most basic thing they might want to do with shapefiles in QGIS, which is to load and visualize them. They are going to start with the easy one: watershed_results_wyield.shp. The instructor is going to drag that file over into their QGIS workspace to display it. Here are all the watersheds present in Nicaragua displayed as polygons on the map. Because the class is also learning the basics of QGIS, the instructor wants to point students to some of the key things they can do with shapefiles. Like before, with raster files, if a student loses sight of a layer altogether, they can go to Zoom to Layer if the view is pointed somewhere that does not look like it has any data. They can also select specific polygons with a selection tool. Another useful thing is the tool with the little information icon. If students click that and then click inside a polygon, it shows all of the information and attributes associated with that polygon. It might be too much information, but it shows students what data are being stored in this shapefile. The shapefile has these polygons, and for each polygon it records a bunch of extra attributes. Students who become GIS experts start to learn a lot about what these different attribute fields represent. The instructor just points students to the end of the attribute list, where they can notice fields named wyield_mn and wyield_vol, the mean water yield and the water yield volume. These are the key results that InVEST generated and stored in the shapefile.

Viewing the Full Attribute Table

The instructor also wants to show students how to look at the table of all the results at once. So instead of just clicking on one polygon to see its attributes, what if students want to see results for water yield for all the polygons simultaneously? For that, students should right-click on the layer name and go down to Open Attribute Table. What students will get is a table view where each row is the data associated with one of the polygons. Before, when the instructor clicked on a polygon, it was essentially just showing that single row. But now students are seeing all the different polygons and all their associated data in a tabular format. Another fun thing students can do is click through this table, and it will highlight which of the polygons it is representing in the map. Looking here at the attribute table, a lot of this pre-calculated stuff was not generated by this particular model run, but the class is going to skip over that. What the class really cares about is if students scroll all the way to the right of the attribute table. That is where the water yield volume column is located. This column is going to show what is literally the volume of water measured, which the instructor believes is in cubic meters, though students can refer to the InVEST user’s guide if they want confirmation. But this is the key model output saying that given everything the model knows about evaporation, transpiration, root interactions, and all the other hydrological processes, this is how much water flows into the bottom point of each of the sub-watersheds. The instructor sees that some of the watersheds do not have the sub-watershed data, so students might want to load up that secondary result, the subwatersheds.shp file, and look at it there. But the point is, this water yield volume is the key result they have been working toward.

Connecting Biophysical Results to Economic Value

Tying this back to how the class has been talking about ecosystem services throughout the course, students should recall that there is the ecosystem structure. From that structure, an ecosystem service production function produces some level of biophysical ecosystem service provision. That is what we have in this column here: the volume of water. That biophysical provision is the thing students can then, hopefully, multiply by some price or other monetary metric to bring it from biophysical terms into economic terms that policymakers and the public can understand. Water yield is a biophysical variable, not an economic one; it is measured in physical units like cubic meters. But students might think that water probably does have some direct connection into the economy, especially if it is used for hydroelectric power generation or drinking water supply. After viewing the results, the instructor is going to circulate around the classroom, see the status of everybody’s model runs, and check whether they are having any issues.
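The multiply-by-a-price step described above is just one line of arithmetic. Both numbers below are invented placeholders, not real Nicaraguan values: in practice the yield would come from the water yield volume column of the attribute table, and the per-unit value from local market or policy data.

```python
# Minimal sketch of moving from biophysical to economic terms: multiply
# a sub-watershed's water yield volume (cubic meters) by an assumed
# marginal value of water. Both numbers are hypothetical.
water_yield_m3 = 1_200_000   # invented sub-watershed yield, m^3/year
value_per_m3 = 0.25          # assumed marginal value of water, $/m^3

annual_value = water_yield_m3 * value_per_m3
print(annual_value)
```

This is deliberately naive (a constant price per cubic meter); the hydropower valuation that follows in the lecture replaces the assumed price with market prices plus physics.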

Technical Reminders for QGIS Users

A good question has come up: if students have not gotten the attribute table up on their screen like the instructor has shown, just as a reminder, they should go to their Layers tab and right-click on the layer that they want to look at and go to Open Attribute Table. The attribute table is essential for viewing the full results. Some people are asking about the icons for different tools being in different locations on people’s computers, which can make it confusing to find tools. The tool that students want to select if they want to select a specific polygon is the one that has the pointer arrow inside a little box next to a bigger box. That is the polygon select feature. Another thing to note is that students can select multiple polygons at once. That can be useful for various purposes in their analysis too.

Exporting Subsets of Spatial Data

This class is noted as being way above average in terms of tech competency, which is very good and will serve students well in their future environmental work. The last thing the instructor might say, in terms of the bonus GIS training students are getting from this class, is that oftentimes students only care about one sub-watershed, or maybe they have a map of the world and only care about one country. In those cases, QGIS gives a very easy way to create new files containing only the subset of data that matters. Say students only care about this watershed, this watershed, and this watershed. If students select the ones they care about, they can right-click on the layer that they have loaded and hit Export, which gives them the option to save the selected features. What is really nice is that it defaults to the GeoPackage format instead of shapefile. The instructor threw a lot of shade at the shapefile format earlier, but they have been too busy prepping for their Chile trip to actually fix the data layers for the students, so they are kind of a hypocrite about this. Either way, it will save as a GeoPackage by default, though students can switch the format back to shapefile if they like it old school. The instructor wants to make sure students have no questions about this process before moving on.

Working with Attributes in QGIS

The instructor reminds students about opening the attribute table to view all features at once: select the layer, go down to Open Attribute Table. The terminology is a little bit different in QGIS than in ArcGIS. In QGIS, they call it attributes instead of features, which can be a source of confusion for students who have worked in ArcGIS before. Unless there are any remaining questions, the instructor wants to pivot to the really important topic of what do students actually do with these water yield results.

Valuation of Water Yield: Hydropower Case Study

From Biophysical to Economic Values

The instructor wants to pivot to the topic of what students do with the water yield results they have generated. Students got the volume of water, right? What might students want to do if they are going to a policymaker and making an argument that some environmental restoration program is or is not worth it from an economic standpoint? Basically, they are trying to do a cost-benefit analysis, but now properly including all the values that are either ignored as externalities or simply ignored because we do not even know they exist, which is even worse than an externality. An externality is at least visible: we know it is there and that it has effects, even if nobody is pricing it. An unvalued ecosystem service can be worse still, an invisible value that most people do not even know exists.

Hydropower Valuation Framework

The valuation method starting from the InVEST endpoint is really straightforward because it is basically market prices plus physics. We have a pretty clear idea of why we spend so much money having the Army Corps of Engineers and similar agencies build dams. When you create a dam at the bottom of a watershed or sub-watershed, you know that, based on how much water you let through, some subset of this water is going to fill up to the height of the dam. In principle, if you had a dam higher than all the surrounding elevation, it would eventually fill up the whole thing, but that would be a pretty bad idea because now you would have a really massive flood on your hands. The point is that however high we build the dam determines how much reservoir capacity there is, and that capacity captures and constrains the water volumes we just computed in InVEST. Since we have our water volume for each watershed, we will need a little more information: what is H sub D, the height of the water behind the dam above the turbine level?

Physics of Hydropower Calculation

There are some other coefficients that go into the hydropower calculation, like gravity: the acceleration of approximately 9.81 meters per second squared, a constant that comes up in a lot of physical calculations. Gravity determines how much electricity we can get from falling water because potential energy is mass times gravity times height. There is also rho, the density of water, which is roughly 1,000 kilograms per cubic meter; the number is so round because the kilogram was originally defined as the mass of one liter of water. That is a little of what goes into the calculation. Students would essentially need to get information from the government agency responsible for managing the dams in their country. That information is actually pretty easy to get in most cases through public records or agency websites. There are a few additional things that are a little harder to find. Students need to know the turbine efficiency and what percentage of the inflow water volume at the reservoir will actually be used to generate electricity. Most dams let a large portion of the water through without ever using it to generate electricity, because there might be more water flowing in than the dam has capacity to put through the generators.
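A minimal sketch of the physics just described, with every input invented: the energy available from a year's inflow is density times gravity times head times the volume that actually passes through the turbines, scaled by turbine efficiency (this mirrors, in simplified form, the kind of energy equation the InVEST hydropower valuation uses).

```python
# Hydropower energy sketch: E = rho * g * h * V_used * efficiency
# (potential energy of the water actually routed through the turbines).
# All dam parameters below are hypothetical.

RHO = 1000.0   # density of water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def annual_energy_kwh(volume_m3, head_m, turbine_efficiency, fraction_used):
    """Energy generated from one year's inflow volume, in kWh."""
    joules = RHO * G * head_m * volume_m3 * fraction_used * turbine_efficiency
    return joules / 3.6e6  # 1 kWh = 3.6 million joules

# Hypothetical dam: 50 m head, 85% turbine efficiency, and 60% of the
# inflow volume actually routed through the turbines.
energy = annual_energy_kwh(volume_m3=2_000_000, head_m=50,
                           turbine_efficiency=0.85, fraction_used=0.60)
print(round(energy))
```

The two hard-to-find parameters from the text, turbine efficiency and the fraction of inflow used, enter as simple multipliers, which is why missing either one scales the whole estimate.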

Economic Calculation of Hydropower Value

The final calculation combines all of the economically relevant components to determine the total value from hydropower. What do we get? The price of electricity multiplied by this epsilon D that we calculated above, which came from the previous two equations involving gravity, density, head, and turbine efficiency, gives us the quantity of electricity. Then it is just price times quantity minus total costs. Now we are back to basic economics, right? This is total revenue minus total costs, the standard economic profit calculation. Then we add one extra term: the discounting factor. Students have seen this concept a bunch of times throughout the course. However far into the future this flow of value accrues, the discounting denominator grows with time t, which makes the discount factor smaller. This means that dollar values farther out in time are worth less in present value terms. Discounting accounts for human time preferences and the opportunity cost of capital. This gets us the net present value of hydropower at that dam, a single number that can be used in policy analysis.
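The profit-and-discounting step can be sketched as follows, with all numbers invented for illustration: each year's revenue minus cost is divided by (1 + r) raised to the power t, and the discounted annual profits are summed into a net present value.

```python
# Net present value sketch: sum over years of
# (price * energy - annual cost) / (1 + rate)**t.
# All inputs are hypothetical placeholders.

def npv(annual_energy_kwh, price_per_kwh, annual_cost, rate, years):
    total = 0.0
    for t in range(years):
        profit = price_per_kwh * annual_energy_kwh - annual_cost
        total += profit / (1 + rate) ** t  # discount year t back to today
    return total

value = npv(annual_energy_kwh=140_000, price_per_kwh=0.10,
            annual_cost=4_000, rate=0.05, years=30)
print(round(value))
```

Note how the same $10,000 of annual profit contributes less and less to the total as t grows, which is exactly the "farther out in time, worth less today" point from the lecture.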

Policy Relevance of Hydropower Valuation

What makes this relevant to policymakers is that if there is ever a cost-benefit analysis where something is going to disrupt the hydrological cycle, say diverting water to irrigation so it never makes it to the hydropower dam downstream, a model like this lets us more accurately compare the costs and benefits of that decision. Has anybody heard about the proposal to pipe water from Lake Superior down to Arizona? That is a big example of the kind of large-scale water diversion where hydropower impacts would matter. These are huge diversion projects that could have enormous hydropower implications. The instructor does not think that particular one is going to go through, but projects like this actually do happen in various places around the world, and water diversion cases are a prime example of local costs and benefits diverging from costs and benefits at the larger scale. That is the valuation method for hydropower, one important way to put economic value on the water yield that the InVEST model produces.

The Value of a Statistical Life

Introduction to VSL Concept

That is the valuation method for hydropower, which leaves us with just 10 minutes to talk about the very last topic the instructor wants to cover in this lecture: the value of a statistical life. What is kind of nice is that in calculating the value of a statistical life, which students will henceforth just call VSL because you see that abbreviation a lot, they will find that the basic methods are already familiar. It is going to be hedonics in most cases, just applied to a new context.

Hedonic Analysis Refresher

Students might be thinking back to hedonics from earlier in the course. They talked about what is the value of a lake. They talked about how different amenities on the lake—like whether there is a boat launch or how clear the water is—looking at the changes in the clarity of the water and how that affected sale prices of houses on that lake, gave them information about how much people cared about water quality. That hedonic analysis is hard to get and requires a lot of data, but researchers can do the exact same approach to get how much people care about their own life.

Individual Perspectives on Valuing Life

The instructor acknowledges that some people have cognitive dissonance putting a dollar value on a life, and they wonder if that seems wrong to people in the class. When you put it in a specific person’s terms, it becomes a really hard-to-assess thing. But the fact of the matter is, the government does this calculation all the time in various policy contexts. What is sort of interesting is that different agencies have different numbers. The Pentagon has a much lower price or value that they put on a statistical life than many other agencies, and it is actually quite relevant to them because they do lose lives in military operations. They do not use hedonic analysis; they use replacement cost instead—essentially, how much does it cost to train up a soldier? The instructor can sort of see the logic at least from a decision-making metric, even if it feels cold. Oftentimes, though, people are not the military, and they are caring about environmental things instead. How can they do the valuation without looking at it from the perspective of replacement cost? Well, there has been tons of really awesome academic literature on the point that you can use people’s observed market behavior in job markets to determine how much they care about, in fact, their own life.

Job Market Differentiation and Risk

This comes from the fact that, just like with a house or a lake, there is going to be a big bundle of different attributes that people care about with respect to any given job. For a job, it might be what are the responsibilities, do people get to be a supervisor, do they need to travel, can they work from home? That last one is a big one now, especially after the pandemic. What are the hours? But then, critically, there is one attribute that is kind of unique in the context of labor markets: risk of death or injury. We do not think about this too often in most of our daily lives. A lot of the jobs that college-educated people take on tend to have essentially zero risk of death during normal work. But there is actually a ton of data about risk in jobs because some jobs are genuinely risky. Things like working on an oil rig or driving a truck through a war zone have significant risk. There is the TV show Ice Road Truckers, about a genuinely risky job. Another one is Deadliest Catch, where fishermen are catching crab in the Bering Sea in very dangerous conditions. That is also very risky and documented in popular media. Any job has some inherent risk of death or serious injury, though this risk ranges dramatically, from nearly zero for desk workers to very substantial for certain occupations.

Using Wage Differentials to Estimate VSL

We can leverage this fact and all the different observations on how much these riskier jobs pay their workers to determine, in the same way that hedonic analysis was used for house prices and water quality, how much a human’s life is worth in terms of willingness to accept risk. Basically, the way it works is researchers collect data on all those job characteristics—the things that might matter for wage determination, like are you a supervisor, what are the hours, how far do you commute? It is kind of like how many bathrooms in a house or whatever—because researchers are trying to isolate the effect of the risk. They use a statistical model that predicts wage as a function of multiple variables including education of the workers, physical attributes, hours worked, distance to the job, whether work can be done from home—a big one these days. But then researchers include all these things trying to describe all of the attributes that go into predicting wage. But then one last one: risk. This is what researchers call the coefficient of interest if you do statistics.

Specification and Interpretation of VSL Models

Assuming that all of these attributes have been correctly included and nothing important was left out (if researchers left out hours, for example, the estimate for the risk coefficient would pick up the effect of hours instead), then, with a well-specified model, researchers can look at how different risk levels affect the wage. For a 1% increase in the risk of death, researchers solve for the wage increase that compensates workers for bearing that extra risk. This is the key coefficient that comes out of the regression and allows the valuation of statistical lives. It relates to environmental economics because for all sorts of things we do, such as cleaning up the air or cleaning up a toxic spill, much of the value comes not through ecosystem services like sediment retention but simply through keeping people from dying. The mortality reduction channel is often the dominant economic value, even when ecosystem services also have value.
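Here is a toy version of the wage-risk regression, with entirely invented data and risk as the only regressor (a real hedonic wage study would control for education, hours, supervision, and the other attributes listed above). Because the fake wages are constructed to rise exactly linearly with risk, the fitted slope recovers the implied VSL directly.

```python
# Toy hedonic wage regression: the slope of wage on annual fatality risk
# is the "coefficient of interest." Data are invented so that wages rise
# by $5,000,000 per unit of risk.

jobs = [  # (annual risk of death, annual wage in dollars)
    (0.0000, 50_000),
    (0.0001, 50_500),
    (0.0005, 52_500),
    (0.0010, 55_000),
]

n = len(jobs)
mean_risk = sum(r for r, _ in jobs) / n
mean_wage = sum(w for _, w in jobs) / n

# Ordinary least squares slope = cov(risk, wage) / var(risk).
slope = (sum((r - mean_risk) * (w - mean_wage) for r, w in jobs)
         / sum((r - mean_risk) ** 2 for r, _ in jobs))

# The slope is dollars of extra wage per unit of mortality risk,
# which is exactly the implied value of a statistical life.
print(round(slope))
```

In real studies the slope comes out of a multivariate regression so that it is not contaminated by the other job attributes, which is the omitted-variable concern raised above.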

Case Study: Environmental Justice and the HERC

The instructor will give one example: the HERC, the Hennepin Energy Recovery Center. It burns garbage and is very unfortunately located right in the middle of the city, next to a bunch of low-income residents. There is a massive environmental justice aspect to this facility because it imposes costs on people who did not choose to live there and did not benefit from the waste disposal service. How would researchers use the tools of VSL to establish the costs and benefits of a policy to close or replace the facility? The benefits of the facility are clear: it takes our garbage and burns it, managing the solid waste stream. But what are the costs to the surrounding residents? This is a stylized example, but suppose 10,000 people are exposed to the emissions from this facility. The policy, in this case getting rid of the HERC, would reduce each person’s mortality risk by some amount that could be estimated from epidemiological studies; say the reduction is 0.0001 per person. Multiplying the number of residents by the per-person risk reduction converts the abstract risk into expected statistical lives saved, and the instructor has jury-rigged the numbers so that 10,000 times 0.0001 comes out to exactly one statistical life saved by doing the policy. The second thing researchers need is the value of that one life. If hedonic analysis shows people are willing to pay $200 each for that 0.0001 risk reduction, then summing across all 10,000 residents gives a total willingness to pay of $2 million, which is the implied value of the one statistical life saved. These numbers are obviously not real, but they show how getting rid of the HERC would generate an estimate of the health benefit value. 
If the replacement of the HERC costs more than this calculated health benefit, that is important information. If it costs less, then closing the facility passes a basic cost-benefit test on health grounds. It does not say anything at all about the environmental justice component, and that is where there is a real caution. But it is just a good example of how researchers can use this VSL concept in some really meaningful debates about pollution and health.
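The HERC arithmetic can be sketched with the same kind of jury-rigged numbers the lecture uses; everything below, including the VSL and the replacement cost, is invented purely for illustration.

```python
# Stylized HERC cost-benefit arithmetic with invented numbers:
# expected statistical lives saved times an assumed VSL gives the
# health benefit, which is then compared against the policy's cost.

residents = 10_000
risk_reduction_per_person = 0.0001  # assumed drop in annual mortality risk
vsl = 10_000_000                    # assumed value of a statistical life, $

lives_saved = residents * risk_reduction_per_person  # expected lives
health_benefit = lives_saved * vsl

replacement_cost = 8_000_000        # hypothetical cost of closing/replacing
passes_test = health_benefit > replacement_cost
print(round(health_benefit), passes_test)
```

As the lecture cautions, this comparison speaks only to the health channel; the environmental justice question of who bears the costs is not captured by the arithmetic at all.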

Values and Range of Statistical Life Estimates

What the instructor will end with is that there is tons of data out there. The class will return to this topic at the beginning of next lecture with more depth. There is really rich literature on what is the value of a statistical life, and different studies give different numbers depending on the context and populations studied. Just so that students do not leave without this information, it ranges from a little bit less than a million dollars per life to higher-end values looking at around $20 million. That is the range of estimates for how much the economy or researchers values a human life in different contexts. These numbers shape real policy decisions about pollution control, workplace safety, and many other things.

Conclusion and Next Steps

Class Performance and Data Troubleshooting

The instructor asks whether students have had any questions about running their country data through the models; it looked like everybody was pretty successful with their model runs. This class session was also structured to give students practice troubleshooting their country-specific data and all the challenges that come with real-world datasets. Either way, students should feel free to reach out to the instructor if they had any issues. Now is a good time to get help, and the instructor will be available to assist. The next class meeting is coming up soon, and the instructor will see all students then.

Welcome to Day 4: Valuation Methods

Welcome to Day 4 of our discussion on valuation methods. Today’s session will cover the completion of the Value of a Statistical Life (VSL) analysis, followed by an interactive classroom game to reinforce these concepts. We will then discuss the various policies that emerge from different VSL estimates and explore other non-market valuation methods, including ecosystem services. This lecture represents a significant portion of the course material on valuation components.

Completing the VSL Framework

The Literature on Statistical Life Valuation

A vast and extensive literature exists on establishing the value of a statistical life through published, peer-reviewed studies. The critical methodological distinction in this body of work centers on the approach used to estimate these values. Almost all modern studies derive their estimates from labor market analysis, though a smaller subset employs contingent valuation methods. Understanding this methodological foundation is essential for comprehending how economists arrive at VSL estimates and the strengths and limitations of each approach.

Contingent Valuation Versus Labor Market Analysis

Contingent valuation represents a form of direct questioning where researchers ask people about their willingness to pay or accept compensation for various outcomes. When the topic concerns matters as critical as life and death, this method becomes especially challenging to execute accurately. People struggle to provide reliable responses when asked directly about their valuation of life itself. Consequently, the field has progressively shifted toward labor market analysis, particularly in more modern studies. This methodological evolution reflects both the practical challenges of contingent valuation and the theoretical advantages of observing actual market behavior.

The Labor Market Approach to VSL Estimation

The labor market method focuses on analyzing how workers respond to risky employment opportunities. Specifically, economists examine whether workers demand higher compensation to accept jobs with greater mortality risk. This revealed preference approach operates on the principle that workers’ actual decisions in the marketplace reveal their implicit valuation of risk. Rather than asking people hypothetically what they would pay, researchers observe real wage differentials between risky and safer jobs and use these differences to infer valuations of mortality risk.

A Worked Example: Understanding the VSL Calculation

Setting Up the Problem

Consider a straightforward example that illustrates the fundamental calculation underlying VSL estimation. Suppose a worker demands five thousand dollars more in annual compensation to accept a job with significantly higher mortality risk compared to a safer alternative position. We need to establish two pieces of information clearly. First, we need to identify the willingness to accept a risk premium, which represents the wage differential necessary to induce the worker to take on additional risk. This premium is not simply the total wage for the risky job, but rather the difference between what the worker would require for the risky job versus a safe alternative. Second, we need to specify the actual mortality risk associated with the risky job. Suppose the job carries a one percent probability of death annually. This means that on average, if one hundred workers accept this job, one of them will die.

Calculating the Value of a Statistical Life

The calculation of VSL from this information follows a logical progression. If workers will only accept the risky job for an additional five thousand dollars in compensation, and if a one percent mortality risk means that among one hundred workers one will die, then the value of a statistical life is simply the risk premium multiplied by the inverse of the risk probability. In this case, five thousand dollars times one hundred equals five hundred thousand dollars. This calculation reveals that workers implicitly value a statistical life at five hundred thousand dollars based on their wage-risk trade-offs.
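
To make the arithmetic concrete, here is a minimal sketch of the calculation in Python; the function name is mine, and the inputs are the figures from the example above.

```python
# VSL implied by a wage-risk trade-off: the risk premium divided by
# the mortality risk (equivalently, premium times the inverse risk).
def vsl_from_wage_risk(risk_premium, mortality_risk):
    return risk_premium / mortality_risk

# $5,000 premium for a job carrying a 1% annual mortality risk:
print(round(vsl_from_wage_risk(5_000, 0.01)))  # 500000
```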

Why This Approach Matters

This methodology provides a more satisfying approach to valuing life than direct questioning because it is based on observed marketplace information. Real people are working real jobs and making actual decisions about accepting or rejecting risky positions. Economists can extract from the different prices paid in these markets the real premium that people place on their lives when accepting risky employment. The behavior revealed through actual labor market choices provides a foundation grounded in genuine economic decision-making rather than hypothetical responses to survey questions.

The Abalone Fishing Game: A Classroom Demonstration

Setting the Stage for the Experiment

To reinforce these concepts with greater specificity and provide a hands-on learning experience, we conduct an interactive classroom game centered on abalone harvesting. Abalone are mollusks that live underwater in shells and are harvested by divers who feel around underwater with their hands in murky water to locate them. They represent a delicacy and, for our purposes, an example of a genuinely risky occupation. In this classroom experiment, the instructor takes the role of a fishing boat captain who already owns the boat and equipment but requires workers to do the dangerous diving. The captain will hire four divers whose task is to harvest abalone and bring them to market.

The Physical Setup and Game Materials

The physical setup for this game uses common household items to simulate fishing grounds. The primary constraint was finding items that looked and felt different while representing different outcomes in the game. Flavored Keurig coffee pods serve as the base materials for this experiment. The good fish, representing valuable and harvestable abalone, are the vanilla, caramel, and mocha flavored pods. These are desirable outcomes. The bad fish, representing dangerous electric eels that will harm the divers, are the maple pecan flavored pods, which the instructor identifies as the worst possible coffee flavor. The goal of the game is for each worker to harvest three abalone to earn their compensation.

How the Game Works

The game operates through a bidding and labor market clearing process. Workers must write their names on a sheet and choose a wage between one and five dollars that represents the compensation they would require to become a diver. The game proceeds across three different rounds, each representing different fishing grounds with different risk profiles. The captain will then choose workers based on the bids, with the market clearing at a wage where the captain can attract exactly four workers. Workers who bid at or below the clearing wage are selected to participate. If a worker bids below the clearing wage, they benefit because they receive the higher market wage. Compensation is provided in class points, which can be used to skip homework assignments or restore late or incomplete work to full credit value. If a worker successfully completes their job by drawing three good fish, they receive class points equal to the wage they bid or the market wage, whichever is higher.

Safety Valves and Risk Mitigation

Because the game involves some workers being uncomfortable with the risk component, there is a built-in safety valve. If a worker fears the fishing experience, they can simply bid five dollars, which is set above the expected market clearing price. The market will clear somewhere below this threshold, meaning that if workers bid the maximum, the captain will need to find other workers willing to work for less. Additionally, workers who participate but die during the experiment, though they lose their class points for that round, can continue in the class without penalty. This eliminates any genuine consequence beyond the academic incentive.

The Three Fishing Areas and Their Risk Profiles

The game features three distinct fishing areas with progressively increasing risk levels. In Area 1, all nine items in the fishing pool are good abalone with no dangerous eels. This represents a completely safe fishing ground where divers have a one hundred percent chance of drawing a good fish on each draw. Since workers must draw three times, the probability of successfully completing the job is one.

Area 2 introduces moderate risk by adding one maple pecan eel to the nine abalone, for ten total items. This means that ten percent of the items are dangerous eels. To calculate the probability of being electrocuted when drawing three times, we first multiply the probability of drawing a good fish across all three draws, treating the draws as independent: 0.9 times 0.9 times 0.9 equals 0.729. The complement, one minus 0.729, means there is a 27.1 percent chance of drawing at least one eel and being electrocuted.

Area 3 represents a truly dangerous fishing ground with three eels among ten total items, meaning thirty percent of items are dangerous. The probability of survival when drawing three times is calculated as 0.7 times 0.7 times 0.7 equals 0.343, which means there is a 65.7 percent chance of drawing at least one eel and being electrocuted. This represents a substantially riskier job.
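
The survival arithmetic for all three areas can be checked with a short script. This treats the three draws as independent with a fixed per-draw eel probability, which is the simplification used in the calculations above; drawing without replacement from a small pool would give slightly different numbers.

```python
# Chance of drawing at least one eel across n independent draws,
# given the per-draw probability of an eel.
def p_electrocuted(p_eel_per_draw, n_draws=3):
    p_survive = (1 - p_eel_per_draw) ** n_draws
    return 1 - p_survive

print(p_electrocuted(0.0))            # Area 1: 0.0
print(round(p_electrocuted(0.1), 3))  # Area 2: 0.271
print(round(p_electrocuted(0.3), 3))  # Area 3: 0.657
```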

Labor Market Theory Applied to the Game

This classroom experiment can be directly mapped onto fundamental microeconomic theory. On the vertical axis of a standard labor market diagram, we would plot wages ranging from one to five dollars. On the horizontal axis, we would plot the number of workers. The firm, represented by the fishing boat captain, has a fixed demand for exactly four workers, represented as a vertical line at four workers. This labor demand does not change regardless of wage because the captain needs exactly four people to harvest twelve abalone at three per person.

On the supply side, individual workers will determine their willingness to work at different wages based on the risk profile of the job. Workers have varying risk preferences and requirements, so the labor supply curve will slope upward, indicating that higher wages are needed to attract more workers. The market will clear where the labor supply curve intersects the fixed demand, determining the wage at which exactly four workers are willing to participate. The captain, as a profit-maximizing firm, will set the wage at this clearing point. If the wage is set too low, fewer than four workers will volunteer. If the wage is set too high, the captain will not maximize profit. The equilibrium wage emerges where supply meets demand.
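
With demand fixed at four workers, the clearing rule described above is simple to state mechanically: rank the bids and pay everyone the fourth-lowest bid. A sketch with made-up bids (names and numbers are illustrative; ties here are broken by bid order rather than the random draw used in class):

```python
# Market clearing with fixed demand for four workers: the clearing
# wage is the fourth-lowest bid, and everyone who bid at or below it
# is hired at that wage.
def clear_market(bids, demand=4):
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    clearing_wage = ranked[demand - 1][1]
    hired = [name for name, bid in ranked[:demand]]
    return clearing_wage, hired

bids = {"Ana": 2, "Ben": 2, "Cam": 3, "Dee": 4, "Eli": 5}
print(clear_market(bids))  # (4, ['Ana', 'Ben', 'Cam', 'Dee'])
```

Note that workers who bid below the clearing wage (Ana, Ben, Cam) still receive the market wage of 4, which is exactly the surplus described in the rules.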

Running Round 1: The Safe Fishing Ground

In the first round, workers are asked to write down the wage they would require to fish in Area 1, where all nine items are good abalone and there is no risk of electrocution. After collecting all bids, the captain announces the clearing wage. In a typical result, five workers indicate they are willing to work for two dollars. Using a random selection mechanism, the captain selects four workers to proceed. These four workers come to the front of the classroom and each draw three items from the pool. Since all items are good abalone, all workers successfully complete their draws and receive their payment in class points.

Running Round 2: Moderate Risk

Round 2 introduces risk by adding one maple pecan eel to the nine abalone. Now there is a ten percent chance of drawing the eel on any single draw. Workers are asked to indicate the wage they would require to participate in this round, knowing that the risk profile has changed. When workers announce their bids, a typical result shows that most workers maintain their bids at two dollars, indicating that they do not perceive the ten percent risk as substantial enough to warrant higher compensation. However, some workers may increase their demands to three or four dollars. The captain selects four workers based on the market clearing wage, which typically remains at two or three dollars.

The selected workers come forward and draw three times each. With a ten percent chance per draw, approximately one worker may draw an eel and be electrocuted. That worker receives no class points and must sit out the remainder. The surviving workers typically all complete their draws without electrocution and receive their class points. This round demonstrates how workers respond to the introduction of moderate risk with minimal wage increases in many cases.

Running Round 3: High Risk

In the final round, the risk profile increases substantially. The captain removes two abalone and adds two more eels, creating a pool with three eels and seven abalone. Now thirty percent of the items are dangerous. Workers are explicitly told that if they are electrocuted, they receive nothing, and they must carefully consider whether the compensation justifies the risk. When the captain asks for bids, the results typically show a substantial shift in worker preferences. Fewer workers are willing to work for one or two dollars. More workers demand three or four dollars to compensate for the increased risk. Some workers join the five-dollar group, essentially opting out entirely.

The captain selects four workers based on the new market clearing wage, which has typically increased to three or four dollars. These workers come to the front and draw three times from the pool. With a sixty-five percent chance of electrocution, multiple workers are likely to draw an eel. Those who do receive no compensation. The survivors typically all receive their class points at the higher wage. This round viscerally demonstrates how workers adjust their compensation demands when faced with substantially higher risks, consistent with the theoretical predictions of labor market economics.

Deriving the Value of a Statistical Life from Game Results

Collecting and Organizing the Data

Once the game concludes, the instructor compiles the results by recording the market clearing wage for each of the three rounds. The labor supply is revealed through the number of workers willing to work at each wage level in each round. By plotting these points, an upward-sloping labor supply curve emerges, satisfying the law of supply. The data shows that workers required increasing compensation as the risk increased, consistent with economic theory.

The Two-Step Calculation Process

To derive the value of a statistical life from the game results, we employ a two-step process. First, we calculate the wage increment, or risk premium, by taking the difference between the clearing wage in the riskiest area and the clearing wage in the risk-free area. In a typical year, Area 1 cleared at a wage of two dollars, while Area 3 cleared at a wage of three dollars and twenty-two cents. The increment is therefore three dollars and twenty-two cents minus two dollars, which equals one dollar and twenty-two cents.

Second, we combine this wage increment with the actual risk metrics. In Area 3, the probability of death is 65.7 percent, or 0.657 as a decimal. We divide the wage increment by this probability, which is the same as multiplying by its inverse: one divided by 0.657 is approximately 1.522, and one dollar and twenty-two cents times 1.522 equals approximately one dollar and eighty-six cents. Because the premium is paid to every worker while only a fraction of workers die, this quotient already expresses the value of one statistical life, denominated in the game’s currency of class points. Converting class points into real dollars would require an additional scaling assumption, but the structure of the calculation is identical to that used in the labor market studies.
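
A sketch of the same two-step calculation with the typical-year numbers (wages in class points; the 65.7 percent risk is the Area 3 electrocution probability computed earlier):

```python
wage_safe = 2.00    # Area 1 clearing wage (class points)
wage_risky = 3.22   # Area 3 clearing wage (class points)
p_death = 0.657     # Area 3 probability of electrocution

risk_premium = wage_risky - wage_safe  # the wage increment
vsl_points = risk_premium / p_death    # premium per unit of death risk
print(round(risk_premium, 2), round(vsl_points, 2))  # 1.22 1.86
```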

Historical Results and Consistency

The instructor and previous instructor have maintained meticulous records on this classroom game for several years, including data from 2025, 2024, 2023, 2022, 2021, and earlier years. Despite variations in class size and student composition, the results consistently show the same patterns. When people play this game using relatively low stakes, they systematically require more compensation to accept higher levels of risk. The derived values of a statistical life from these classroom experiments fall within ranges that are plausible given labor market research. The consistency across years and classes suggests that the patterns observed reflect genuine economic behavior rather than idiosyncratic results.

The Hedonic Pricing Framework

This entire exercise illustrates the powerful technique of hedonic pricing, or hedonics. Hedonics is a method for extracting implicit prices from market data by examining how the market price varies with different characteristics or attributes. In the context of housing, hedonics might examine how the price of a house varies with access to fishing grounds or a view of a beautiful park. In the context of labor markets and mortality risk, hedonics examines how wages vary with the mortality risk associated with different jobs. Whether examining houses or risky jobs, hedonics provides a powerful methodology for eliciting these implicit prices. This technique has become extensively used throughout the economic literature on valuation.
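
In practice, hedonic studies recover the implicit price of a characteristic as a regression slope: here, wages regressed on job mortality risk, with other job attributes held fixed. A toy version in pure Python, where every number is fabricated for illustration (the data are built with a true implicit price of $5,000,000 plus noise, so the estimated slope should land near that value):

```python
# Toy hedonic wage regression: the OLS slope of wages on job mortality
# risk is the implicit price of risk, i.e., the VSL estimate.
risk = [0.0001, 0.0002, 0.0004, 0.0008, 0.0010]  # annual death risk
noise = [120, -80, 60, -40, 10]                  # idiosyncratic wage variation
wage = [50_000 + 5_000_000 * r + e for r, e in zip(risk, noise)]

r_bar = sum(risk) / len(risk)
w_bar = sum(wage) / len(wage)
slope = (sum((r - r_bar) * (w - w_bar) for r, w in zip(risk, wage))
         / sum((r - r_bar) ** 2 for r in risk))
print(f"implied VSL: ${slope:,.0f}")  # close to the $5,000,000 built into the data
```

The same least-squares machinery works for housing: swap mortality risk for park access or a view, and the slope becomes the implicit price of that amenity.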

Policy Applications: Air Quality and Valuation

The Epidemiological Foundation

One of the most significant real-world applications of VSL estimation concerns air quality policy. The Environmental Protection Agency identifies key pollutants that require monitoring and regulation, with particulate matter smaller than 2.5 microns (PM2.5) being among the most important. PM2.5 comes from various sources but primarily originates from emissions produced by burning coal. This particulate matter enters the lungs and causes premature mortality. One of the seminal articles in this field compiled epidemiological evidence about the relationship between PM2.5 concentrations and health outcomes, then multiplied these health effects by VSL estimates to monetize the policy benefits.

Calculating Avoided Mortality and Benefits

The epidemiological research conducted a literature review to identify evidence on the number of deaths per one hundred thousand people at different PM2.5 concentrations. The concentrations were measured in micrograms per cubic meter, with levels like thirteen, twelve, and eleven considered. The research calculated the “avoided mortality” at each concentration level, meaning the number of lives that would be saved by reducing pollution to that concentration compared to a higher baseline. For adult mortality, reducing PM2.5 to thirteen micrograms per cubic meter avoided one hundred forty deaths per one hundred thousand people. Reducing it further to twelve micrograms per cubic meter avoided four hundred sixty deaths per one hundred thousand. Reducing it even further to eleven micrograms per cubic meter avoided fifteen hundred deaths per one hundred thousand.

Beyond Mortality: Measuring Morbidity Benefits

While the mortality benefits form the core of the analysis, the research also examined a range of non-fatal health outcomes. Infant mortality was examined, though the numbers were substantially smaller than adult mortality. Beyond mortality considerations, the research quantified benefits including avoided non-fatal heart attacks, hospital admissions for various respiratory and cardiac conditions, emergency room visits for asthma and other respiratory conditions, and lost work days due to illness. These non-fatal outcomes represent genuine health improvements that people care about. The research recognized that while these outcomes do not result in death, they nevertheless represent significant harms that deserve to be valued in a policy analysis.

Personal Experience and the Severity of Air Pollution

The impact of air pollution on health and quality of life becomes concrete when considering acute asthma responses to pollution. Many people have experienced the terrifying sensation of being unable to breathe during exposure to heavy pollution or air quality events. In one such instance while traveling in Africa, the instructor encountered the local practice of burning garbage at a designated time after the workday ends. The burning occurred all at once and created a dense haze of pollution. Exposure to this pollution triggered hyperventilation and a struggle to obtain sufficient air, a genuinely frightening experience. Though the instructor was ultimately fine, the experience illustrates why people care deeply about air quality and why non-fatal impacts deserve consideration in policy analysis.

Monetizing All Health Benefits

Once the epidemiological relationships between PM2.5 and health outcomes are established, researchers can monetize all these benefits by applying appropriate valuations. For mortality outcomes, they multiply the number of avoided deaths by the VSL. For non-fatal outcomes like heart attacks, hospital admissions, emergency room visits, and lost work days, they apply unit values derived from other sources or studies to monetize these health improvements. The result is a comprehensive picture of the total health benefits at each pollution reduction level, expressed in monetary terms. This allows for comparison with the costs of pollution abatement, following the cost-benefit framework discussed in earlier lectures.
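
The accounting step itself is just multiplication and addition. A hedged sketch, where the VSL and all of the non-fatal case counts and unit values are illustrative placeholders rather than the study’s actual figures (only the 140 avoided deaths per 100,000 comes from the numbers above):

```python
VSL = 9_000_000  # illustrative VSL in dollars (not the study's value)

# Avoided deaths per 100,000 people from reducing PM2.5 to 13 ug/m3:
avoided_deaths_per_100k = 140

# Avoided non-fatal outcomes: (cases per 100,000, unit value in dollars).
# Both numbers in each pair are made up for illustration.
nonfatal = {
    "heart_attacks": (12, 120_000),
    "er_visits": (300, 500),
    "lost_work_days": (5_000, 150),
}

benefits = avoided_deaths_per_100k * VSL
benefits += sum(cases * value for cases, value in nonfatal.values())
print(f"${benefits:,} per 100,000 people")  # $1,262,340,000 per 100,000 people
```

The resulting total can then be set against abatement costs at the same concentration target, completing the cost-benefit comparison.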

Extending VSL Beyond Human Lives

The Value of a Statistical Dog Life

The valuation methodology is not limited to human lives. One creative research paper applied contingent valuation methodology to estimate the value of a statistical dog life. Using surveys where respondents were asked their willingness to pay to reduce risks to dogs, researchers found that people value a statistical dog life at approximately ten thousand dollars. This application demonstrates that people care about the welfare of other beings and will express this care through monetary valuations.

Why Hedonic Analysis Cannot Be Applied to Dogs

When considering why hedonic pricing cannot be applied to estimate the value of a dog’s life, the key issue becomes apparent. Hedonic analysis relies on observing market choices and trade-offs. Dogs do not participate in labor markets. They do not make choices about whether to accept risky employment in exchange for compensation. Researchers cannot observe dogs’ revealed preferences through their market behavior because dogs do not engage in the types of economic transactions that generate the necessary data. Consequently, contingent valuation, which directly asks people about their valuations, becomes the only practical methodology for estimating the value of a statistical dog life. This limitation highlights that hedonic analysis, while powerful when applicable, requires observable market transactions involving the entity being valued.

Critical Examination of VSL Methodology: Strengths and Limitations

The Virtue of Observed Market Behavior

The fundamental strength of hedonic valuation of a statistical life is that it is grounded in observed behavior in actual markets. Workers make real decisions in the marketplace, accepting or rejecting jobs based on the wages offered and the risks involved. These actual choices reveal people’s genuine preferences regarding mortality risk in ways that hypothetical questions cannot replicate. The wage-risk trade-offs observed in actual labor markets provide a foundation for VSL estimates that reflects genuine economic decision-making.

The Problem of Perfect Information

Despite the virtues of labor market analysis, significant limitations require careful consideration. The first critical assumption is that workers have perfect information regarding the risks they face. In reality, workers often systematically underestimate risk, particularly when risks are relatively small. Behavioral evidence strongly suggests this assumption is frequently violated. Workers may not have accurate information about occupational mortality rates, may not understand or internalize small probability risks, or may base their risk perceptions on biased information or cognitive errors. If workers lack accurate risk information when making their job choices, the resulting wage-risk trade-offs do not reflect informed decisions and may not provide reliable valuations.

The Problem of Non-Representative Risk Preferences

A second significant issue concerns whether the workers who accept risky jobs have risk preferences representative of the broader population. Consider workers on off-shore oil derricks or cowboys engaged in inherently dangerous occupations. These professions often develop a particular culture that celebrates danger and risk-taking, potentially attracting individuals with unusual risk preferences who are more willing to accept danger for lower compensation than the general population. If the workers willing to take risky jobs are systematically different in their risk preferences, using their wage-risk trade-offs to estimate values for the general population may produce misleading estimates. A revealed preference in a self-selected group of risk lovers may not generalize to the population as a whole.

The Issue of Uncompensated Externalities

When a worker accepts a risky job, that worker bears the personal cost of potentially dying. However, the costs of a death extend far beyond the individual. The family members and loved ones of the deceased suffer significant emotional and financial harm. The employer and coworkers lose a productive member of the workforce. Society loses whatever contributions the deceased would have made. These external costs are not borne by the worker and therefore do not factor into their wage-risk trade-off. If workers only consider their private cost and benefit when deciding whether to accept a risky job, their wage demands will not capture the full social cost of the risk, leading to an underestimate of the true value from a societal perspective.

The Age Question: Differential Valuation Across Lifecourse

Perhaps the most contentious issue in VSL estimation concerns whether the value of a statistical life should vary by age. Should the death of an eighteen-year-old be valued differently from the death of a ninety-year-old? This question involves both economic logic and moral intuitions. From an economic standpoint, there is an argument that younger people have more remaining life years ahead of them, so avoiding their death preserves more years of life. However, this calculation raises profound questions about whether it is fair or moral to place different monetary values on different people’s lives based on age. When researchers have surveyed the general public about this question, responses were roughly evenly divided, with some people firmly believing that all people should be valued equally and others endorsing differential valuations based on remaining life expectancy.

The Political Controversy Surrounding Differential Valuation

Several governments have attempted to implement policies that value lives differently based on age, and the political response has been severe. Citizens and advocacy groups argued vehemently that all people are created equal and that their lives should not be assigned different monetary values based on age. The controversy demonstrates that this is not merely an academic or technical question but a deeply political one involving fundamental questions about equality and human dignity. Public opposition to age-differentiated valuations has been substantial enough to derail or modify several policy initiatives.

Quality-Adjusted Life Years as a Solution

Within the economics profession, the quality-adjusted life year (QALY) has emerged as a preferred framework for thinking about the age question while respecting the value of all lives. QALYs recognize that when assessing mortality risk reduction, we should consider two distinct variables. First is the timing of death: how long does the person live? Second is the quality of life during those years: what is the person’s health status and functional capacity throughout their lifespan? The benefit is conceptualized as the area under the curve of quality of life over the person’s remaining lifespan, somewhat analogous to consumer surplus but applied to health outcomes.

QALYs allow for comparison of different health interventions with different impacts on longevity and quality of life. Someone might die early but experience good health for a period before a sudden decline due to exposure to a hazardous substance like mercury. Their quality-adjusted life years would be calculated as the area under that curve. Someone else might live longer but experience declining health throughout their lifespan. The two individuals might have different total QALYs even if one lived longer in years. A policy that increases safety by reducing the probability of death will appear as an increase in quality-adjusted life years simply because the person shifts from early death to later death, extending the period of life. Alternatively, an intervention like improving indoor air quality might increase quality-adjusted life years even without extending life span, because it improves the quality of the years the person does live. QALYs thus provide a framework that values both mortality and morbidity improvements consistently and avoids the moral objection to assigning different values to different people based solely on age.
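
The area-under-the-curve idea can be sketched numerically: assign each remaining life year a quality weight between zero and one and sum the weights (a crude one-year-step integral; real QALY calculations typically also discount future years).

```python
# Total QALYs = area under the quality-of-life curve, approximated as
# the sum of annual quality weights (each between 0 and 1).
def qalys(quality_by_year):
    return sum(quality_by_year)

short_healthy = [0.9] * 10                            # 10 years at quality 0.9
long_declining = [0.9 - 0.03 * t for t in range(25)]  # 25 years, declining health

print(round(qalys(short_healthy), 2))   # 9.0
print(round(qalys(long_declining), 2))  # 13.5
```

Here the longer, declining life yields more QALYs than the shorter healthy one, but the gap is far smaller than the 15-year difference in lifespan, which is exactly the trade-off the framework is built to capture.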

Historical Perspectives on Risk and Occupational Hazards

Mercury and the Mad Hatters

Historical evidence provides powerful illustrations of the harsh consequences of occupational exposure to toxic substances. In earlier centuries, hat makers suffered from chronic mercury poisoning. Mercury compounds were used in felting the fur from which hats were shaped and finished. Workers in this industry were routinely exposed to mercury dust and vapor without protection or awareness of the dangers. The mercury accumulated in their bodies and caused severe neurological damage, leading to the expression “mad as a hatter” to describe the symptoms of mercury poisoning: erratic behavior, violent mood swings, cognitive decline, and other mental symptoms.

Chimney Sweeps: The Ultimate Occupational Hazard

Perhaps the most horrifying historical example concerns chimney sweeps in Victorian England. Because of the small sizes of chimneys and their interior dimensions, the most efficient workers for this task were children. Homeless and poor children were employed as chimney sweeps, forced to climb inside narrow chimneys caked with coal soot to scrape out the accumulated deposits. The expected lifespan of a chimney sweep child was seven years. These children worked in conditions of constant exposure to coal dust, unable to breathe properly, spending days cleaning multiple chimneys. The combination of injuries from climbing in confined spaces, respiratory damage from coal dust exposure, and poor living conditions meant that these children rarely survived to adulthood. This represents perhaps the most extreme example of workers accepting extraordinarily dangerous and unhealthy conditions because of overwhelming poverty and lack of alternatives. The moral dimensions of occupational risk and valuation become starkly clear when considering how desperately poor children were exploited in dangerous conditions with minimal compensation and anticipated early death.

Conclusion: Summary of Key Concepts

The value of a statistical life represents a powerful tool for economic policy analysis and decision-making. Derived primarily from labor market analysis of wage-risk trade-offs, VSL estimates allow policymakers to quantify the benefits of policies that reduce mortality risk in monetary terms. The classroom abalone fishing game demonstrates the core concept: workers require higher compensation to accept greater risks, and this wage differential combined with risk probability yields a value for a statistical life. These estimates have found extensive application in air quality policy and other regulatory contexts where mortality risk reduction is a central consideration. However, significant limitations and ethical concerns surround the VSL methodology. Assumptions about perfect information, representative risk preferences, uncompensated externalities, and appropriate age-differentiation all deserve careful scrutiny. Quality-adjusted life years provides a framework that addresses several of these concerns by incorporating both mortality and morbidity into a comprehensive measure of health improvements. Understanding both the strengths and limitations of VSL represents an essential component of competence in environmental economics and policy analysis.

Transcript (Day 1)

Alright, well, let’s get started. Today, we’re going to talk about valuation of ecosystem services. We’ve been spending a lot of time on the concept of ecosystem services, but so far we’ve really focused on the estimation of the production function side of things, right? That’s where we spent a bunch of time using the InVEST toolkit to establish where specific ecosystem service values are generated—very spatialized information. We did some QGIS work. But the whole time, I’ve been talking about the value proposition of ecosystem services to conservation: that it puts a dollar value on ecosystem services.

In a lot of what we’ve done so far, we really haven’t focused on that monetization component. So today we’re going to focus on that and talk about how you could take the InVEST outputs you have that are biophysical and put a specific dollar value on them.

First off, this slide here might look familiar. What kind of data do you think this is, thinking back to what we’ve been using in QGIS? What does this look like? We’ve used it in the sediment retention model. What do you think this is mapping?

It is a DEM. These are kind of beautiful, just looking at the variation—almost looks organic, even though it’s not. Can anybody figure out where this is?

That’s hard, but look up in this area here. We’re right here, or maybe more like here. So this is the Mississippi. This is that bend in the Mississippi where the Minnesota and Mississippi rivers join. The St. Paul campus is right about here, but then you can see it flow. If I zoomed out a little bit more, it would get a whole lot easier to see where this is because you’d actually see the coastlines and stuff.

But I’ve spent a lot of time in this geospatial world, and you start to kind of see things this way. I kind of see elevation as one of the key inputs.

Okay, so the agenda for today: we’re going to be switching to the valuation component. We’re really going to have two different sub-themes. First, we’re going to start talking about what are the different types of economic value—it’s frankly present in all sorts of parts of economics, but we’re going to emphasize the parts that are relevant to ecosystem services or even outside of ecosystem services—just the more general task of putting a dollar value on nature, because this is what goes into our cost-benefit analyses. Then we’re going to switch over to talking about the methods we might use to put that specific value on ecosystem services. So it’s types of value and then methods for establishing that value.

First, this is going to be the framework. We’ll return to it throughout and I’ll fill it out as we go. It’s a taxonomy of the different types of total economic value, often abbreviated TEV. We’re going to slowly build up to the whole diagram, but let’s start with some of the sub-components. The first one is going to be on the use side of things: use value.

Use value is one half of the total economic value taxonomy, and we’re going to start with the easy one: direct value.

Direct value is the easiest to think about: it’s where we directly consume something in nature. I’ve embedded a link here; shout out to this YouTube channel, which does a really great job of talking about this. The direct use components of ecosystem services are, number one, things that have market value. A fish, for instance, we can definitely buy from the store, or roots that you can forage for in a forest or simply buy directly. Others, like hunting, certainly have a market value insofar as people buy permits, though sometimes the permit price is less than the total value people would be willing to pay. Timber is another easy one. It’s worth noting that direct use doesn’t mean the thing actually gets used up.

Thinking back to our discussion of public goods versus private goods and consumptive versus non-consumptive goods, there are all sorts of things that have some degree of non-rivalness. Me walking down a trail, at least at first if there aren’t too many people, doesn’t consume it or use it up. I suppose if enough people walk down it you start to get erosion, and if too many people go at the same time, it does start to be rival. So all those ideas about the rivalness versus non-rivalness of public goods are relevant here: direct use value can attach to either type.

Different ecosystem services will sit at different points on that spectrum. You would compete to be the first fisher to get the fish out of the lake right as the fishing season opens, which is rival consumption, but you don’t have to worry about that with a trail unless it gets congested.

Where ecosystem services really start to matter is with indirect value. This is present in lots of parts of the economy but is especially important for ecosystem services, because many of the ways nature provides value to us are not through direct use. A couple of examples here are with respect to water. Water, which we saw on the last slide, is a direct use: you can directly drink it. But there are other parts of the ecosystem that contribute to that direct use.

A couple of examples: a wetland might help clean the water, which is then directly used. We’d put a dollar value on this, but with respect to the drinking water component it’s indirect, insofar as the wetland makes something else more valuable. Another example would be the vegetation and root structure of forests. We know that forests increase water infiltration. Just like in the sediment retention model, the forest holds water in place rather than letting it run off across the landscape, allowing it to seep in, which ultimately increases the amount of water available to the river over time. This contributes to the direct use of water, but the valuation of the forest itself would be indirect use, because we don’t consume the forest in this particular case.

One thing to note is that use values, both direct and indirect, tend to involve things with an observable market value. The purchasing of fish in a grocery store tells us something about how much people actually value that fish. For all the reasons we talked about before, the equilibrium price the market settles on is a good aggregate measure of how much people care about a good. Many of these things, whether direct or indirect use, will have a large component that can be observed directly from the market. What’s nice is that the methods for putting monetary numbers on these ecosystem services can rely on those market values, so that’s quite straightforward.

We’re now going to talk about something that is kind of in between the two: option value.

If you ever take a finance course, you learn a lot about option value, which represents the fact that instead of just paying for something you want to consume right now—like buying fish and then immediately eating it—option value is that people will put a positive willingness to pay, WTP, to preserve an option for future use.

Why do I say finance? You might have heard of options traders. That’s one way you can get yourself into trouble in finance: there are contracts you can sign that give you the right, but not the obligation, to buy or sell a stock if its price crosses some threshold. People put an actual contract in place that gives them the option to do something if something happens. Misuse of complex derivatives like these was one of the things behind the 2008 financial crisis, but that’s a whole different class.

For ecosystem services, this just says that we might put a positive value on avoiding degradation, especially when there is uncertainty or the possibility that the degradation is irreversible.

Just like with the financial option, many people would be willing to pay a premium today for the right to have the benefits from the ecosystem later on. A classic case is the value of genetic resources in biodiversity-rich ecosystems. There’s all sorts of value people get from taking the genetic information of species we discover and figuring out how it might be useful for making drugs; this is one of the major sources of drug discovery. There’s a paper that argues it made a whole lot of sense to pay people not to degrade these biodiversity-rich ecosystems, because degradation might trigger irreversible losses of value we could use later on. Of course, you don’t know exactly what the value is until you do the research and development, but there is real value in preserving the possibility that it’s there.

One of the interesting things about this is that it provides an economic rationale for being careful or precautionary. Even when current benefits seem low, once we consider the preservation of that option for later, the total value can rise substantially.
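A stylized sketch of that precautionary logic, with entirely invented numbers: even a small chance of a large future discovery can outweigh a certain payoff from irreversible development.

```python
# Stylized option-value comparison: convert the land now for a certain
# payoff, or preserve it given some chance of a future discovery (say,
# a drug derived from a wild species). All numbers are invented.
develop_now = 1_000_000        # certain payoff from irreversible conversion

p_discovery = 0.05             # chance the ecosystem yields a discovery
discovery_value = 40_000_000   # payoff if it does

# Expected value of keeping the option open
preserve_expected = p_discovery * discovery_value

# Because conversion is irreversible, preservation can win even though
# the discovery itself is unlikely:
keep_option = preserve_expected > develop_now
```

This leaves out discounting and risk aversion, but it captures the core comparison: a certain payoff today versus the expected value of an option that is destroyed if we develop.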

I’m going to put this one with a dotted line because this one is going to straddle between market values and non-market values, which we will now turn to next.

So, referring back to our original slide, those were the use values. You might have guessed that the second type is non-use value. These are things for which market valuation is typically not possible. We’re going to split this into two specific types of non-use value. First, existence value.

Existence value represents the fact that we might be willing to pay simply for a resource to continue existing, even with no intention of ever using it. It sounds similar to option value, but option value is about preserving the option of using it, whereas existence value is purely non-use. An example would be people who will never visit the Amazon and yet still report that they would be willing to pay for conservation actions that preserve the Amazon.

From the ultra-rational economist’s perspective, this makes absolutely no sense. How can you be willing to put value on something and even pay for its preservation if you’re never going to be the one who gets to consume it? Well, lo and behold, billions of dollars flow to organizations like the Nature Conservancy, where they’re doing exactly this—taking money from individuals who are paying to protect nature they have heard about but will probably never go to.

The Amazon, for sure, will never see me there. I am terrified of going to a place like that with gigantic bugs and huge snakes. The closest I got was when I was in Sri Lanka for a summer and we went to a national park. Even though they’re a lot less developed, we showed up at the main office and it was clear nobody had been there for weeks and weeks. We totally startled the forest guy keeping track of the national park. He’s like, you want to buy a ticket here? He was kind of surprised, so he dusted off these official government tickets and gave them to us.

We went out on this trail and the whole thing was absolutely terrifying. I made it around the first turn and there’s this log across the way. We went to step over it, except the whole log was moving. There were tons of insects everywhere—millipedes and scary things. So I put a positive value on the Amazon, and this is definitely a non-use value. I do not want to go there. We immediately turned around. This was not the type of national park we think of, where it’s very well-manicured like a lot of U.S. national parks are.

That’s an example of people who would pay to protect it even if they don’t benefit from it directly.

Another example would be charismatic species. A lot of money flows into environmental protection because of a handful of cute, cuddly, or sometimes ferocious-looking animals. The panda, the red panda, Bengal tigers: these are very charismatic. We probably care about the whole ecosystem, but something about our human value system makes us willing to pay more for something that is cute, that has features a lot like ours, that looks fuzzy. That makes it a lot easier to drive donations than trying to raise money for cockroaches or something like that, and it’s a big part of the flows of conservation funds.

But even existence value doesn’t quite capture the whole picture of non-use values. When you dig into what creates value for people, there’s also a willingness to preserve things for future generations. Hence the word bequest.

This one is especially important as you get older and start to think about what sort of world you’re going to give to your children, grandchildren, or their grandchildren. It’s motivated by intergenerational equity, and it’s even further removed from narrowly rational economics. Existence value already takes a big step away, because you’re never going to consume the thing, yet you’d still put a dollar value on it. Bequest value goes a step further: it captures the component of wanting other people, in future generations, to have the option of valuing this. We’re not talking about our own future uses but other people’s future uses.

One great thing about this taxonomy is the contrast with standard economics at its most simplified, where everything would be use value, all of it direct, and all of it measured through market prices. This provides a much richer framework for understanding how people actually make decisions. I’d argue that’s probably true way beyond ecosystem service values; a lot of things in the market are like this too, but economics still has a bias toward just that one line.

And so those are the many different types of value. The heterogeneity among these different ways we value things means that, to establish all the different reasons we might care about or put value on nature, we’re going to have to use a wide variety of methods. That’s the second key thing we’re talking about today.

Different ecosystem services will have different components of value, so we’ll need to explore a variety of methods for establishing what that value is. So instead of types, we’re now moving to methods.

What are these methods?

I could align this with the previous graphic. At the bottom of the previous one, we talked about market valuation on the left and non-market valuation on the right. This is similar but moves down to more specific methods.

For market valuation, we’ll start with replacement cost.

What’s replacement cost? There’s interesting research on wild pollinators. It turns out there are research and development funds being put into tiny drones that can actually replicate pollination services. They’re given basic artificial intelligence to navigate to different flowers, basically bump into them, and use little feelers to collect pollen and carry it to other flowers or plants that need pollination.

That sounds expensive, right? Well, that would be very expensive as a replacement for pollination services that we get for free. Here’s the point: it’s not going to have a market value that’s very easily observed, but it leads to the production of crops which do have a market value. If people are willing to spend actual money on these drones, then the value of the natural thing that we got for free does have a value. Put differently, however much you’d be willing to pay for a replacement is a good estimate of the environmental value of the thing lost.

A classic example is with wetland values. If you lose a wetland, any municipal engineer will immediately know you should be considering what alternative you’re going to use for water filtration and storage. My parents are members of a church that was struggling financially, and a big condo development wanted to build right next to them. Because the condo was going to pave a lot of natural land, environmental engineers, who have standard equations for this, determined how much water filtration and storage capacity would need to be installed. The church granted an easement on their property and was paid $660,000 just for the ability to convert a field into a retaining pond. You see those all over, especially in suburbs where there’s lots of space and no underground piping. The solution is to build retaining ponds that hold the influx of water after a big storm but also let it slowly percolate into the groundwater.

That $660,000 was a good estimate for what was the wetland value of keeping that land natural. So one way of thinking about it is that as we degrade nature, there are literally engineering and expensive solutions that we have to pay to get a replacement for the thing we were getting for free from the wetland.

This logic is pretty strong; it even makes it into municipal budget planning discussions. But it can break down. It doesn’t work in all cases. Number one, there’s not always a replacement. The bee drones might turn out not to be a very good replacement, and some things simply have no replacement at all, in which case this method will underestimate the value of what’s lost.

The second method is avoided cost.

It sounds similar but is different. Avoided cost values an environmental good by how much cost to society it prevents from happening in the first place. These red bars indicate how much cost comes from flooding damages. Hurricanes are very expensive: Hurricane Sandy, for example, caused around $61 billion of damage. That damage is very easy to observe because it’s buildings flooded, buildings knocked down, people paying for repairs. There’s no doubt there are costs associated with the event.

But if it happens to be the case, and it often does, that nature provides a way to avoid those costs, we should be putting a dollar value on that. An example here is mangroves. Mangroves are considered coastal armor because they grow right on the edge of saltwater. Anybody ever been to mangroves? They’re so cool. I was on a trip where I got to go snorkeling underneath. The water was only about this deep, but the roots would arc over like tunnels. The fish were just dense there, and it was like a maze.

The root structure is literally armor. When you have a big hurricane or storm event, it drastically reduces the storm surge, waves, and other flooding drivers that would otherwise destroy property just behind the coast. Many other types of ecosystems do this besides mangroves: coral reefs have a huge protective value, and so do seagrasses, among others. The basic idea is that if we didn’t have them, the damages incurred by buildings along the shore would be much higher.

This is similar to replacement cost but different insofar as we’re valuing damage that is avoided rather than a substitute we have to build. There is some overlap, though. Coastal cities are literally putting up seawalls, and global expenditure on them is rising as oceans rise and storms become more extreme. We’re paying a lot of money for really expensive engineered solutions, so mangroves could probably be valued both ways: by the replacement cost of those seawalls and by the avoided storm damages.

Common applications of avoided cost include flood protection, air filtration, and pollination. This works really well when the avoided costs are well defined, but it can get very hard when the damages are diffuse and spread over all of society. The case of hurricanes is easy to see because whoever owns that building, it’s pretty obvious they were the ones damaged. But what about things like climate change through slightly increased temperatures that affect labor forces all throughout society? It’s a little bit harder to see the direct damages, and it’s hard because it’s diffuse among the whole population.

Some of the work we’ve done in Minnesota—here are a couple of my close collaborators—is to think about avoided costs of water treatment. We’re a very agricultural state, so we have a lot of nutrient runoff. One of the many ways this is a problem is when nutrients get into the groundwater. Those folks who have wells pulling from that groundwater are now having polluted water come up, which causes all sorts of health problems, one of which is blue baby syndrome.

As you increase nitrogen in these wells, people experience direct damages that can be measured by the behavior of well owners and what they spend money on to mitigate the damage. A study by Bonnie Keeler and Steve Polasky from this department looked at specific costs paid by specific well owners. The spatial data shows existing wells, with color indicating the level of nitrate pollution. They also surveyed those owners on what they would do or have done to mitigate the damage: reverse osmosis (filtration through a membrane), distillation (evaporating the water with heat), anion exchange (treating it with chemicals), or simply drilling a new well. People do this because you simply can’t use these wells once they get too polluted.

These are real costs: if nature had filtered the water effectively before it reached the wells, these are dollar values the well owners would not have had to pay. They also did an interesting analysis modeling what would happen if agriculture expanded in the future, meaning more agricultural fields and more nutrients put onto the landscape. They modeled how many wells would surpass the dangerous threshold under that scenario of agricultural expansion and saw that the dollar values exploded. This is a good example of why you can’t just look at present costs; you also have to consider where things might go under future scenarios.
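The structure of that scenario analysis can be sketched in a few lines. All of these wells, nitrate levels, and mitigation costs are invented for illustration; the actual study used spatial data on real wells and surveyed mitigation costs.

```python
# Hypothetical sketch of an avoided-cost calculation for polluted wells.
NITRATE_THRESHOLD = 10.0   # mg/L, the EPA drinking-water standard for nitrate

# (well id, current nitrate in mg/L, cheapest mitigation cost in dollars)
wells = [
    ("A", 4.0, 1200.0),    # reverse osmosis
    ("B", 12.5, 9000.0),   # new well
    ("C", 8.0, 1500.0),
    ("D", 11.0, 2500.0),   # anion exchange
]

def total_mitigation_cost(wells, extra_nitrate=0.0):
    """Sum mitigation costs for wells pushed over the safety threshold.

    extra_nitrate models a scenario (e.g. agricultural expansion) that
    raises nitrate uniformly across all wells.
    """
    return sum(cost for _, level, cost in wells
               if level + extra_nitrate > NITRATE_THRESHOLD)

today = total_mitigation_cost(wells)                          # costs already incurred
expansion = total_mitigation_cost(wells, extra_nitrate=3.0)   # scenario

# The difference is the additional cost nature would have helped us avoid:
avoided = expansion - today
```

The real analysis is spatial, so the nitrate increase varies well by well depending on nearby land use, but the accounting logic is the same.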

So that’s the market valuation side. Now let’s switch to the non-market side.

Those methods are market-based because we observe something in the market: we saw a person pay the replacement cost, or we saw damages actually happen or be avoided. But in many cases there is no observable market transaction, so you have to get a little more clever. These are a lot of fun.

The first one is hedonic analysis.

This is something that exists outside of environmental economics. Hedonic shares a Greek root, meaning pleasure, with the word hedonism: how do we get direct pleasure from things? The technique initially came about to understand how people value the attributes of things like houses. How much money is an extra bathroom worth?

Hedonic analysis is a bunch of statistical techniques to analyze different transactions on houses of different qualities, like 3 bathrooms versus 4, and make a predictive model for how much higher the price would be with an extra bathroom. But in the domain of environmental values, what’s really cool is this technique can use the same datasets—the transactions of all housing sales in a metro area—and look at how houses near environmental amenities sell for more.

This is a useful way because the difference in price between, say, a home with a beautiful environment around it versus a home in an urban concrete area—that price difference is another way of putting a dollar value on how much people care about nature.

Who can think of a problem with this approach? What else might be true of houses next to a really beautiful park versus houses in an urban concrete area? Rich people tend to live near nature. We’re all thinking it, and that’s the case. Part of the price difference really does reflect that nature is valuable, but part reflects that wealthy people tend to buy houses in areas with lots of green space and park access, and those houses differ in lots of other ways too.

This is why this is a deeply statistical approach where you use regression analysis, which essentially means let’s try to isolate the environmental attribute, like park proximity, from all the other housing characteristics. If we ignore other housing characteristics like the size of the house, then we’re going to be obviously misestimating how valuable that environmental amenity is. But there are tons of examples where we have enough data to do this well.
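To see how regression isolates the amenity, here’s a minimal sketch on synthetic data. Everything is invented: we generate prices from a known “true” model with a $25,000 park premium, then check that ordinary least squares recovers that premium while controlling for house size.

```python
# Minimal hedonic-regression sketch with synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 500
sqft = rng.uniform(800, 3000, n)       # house size in square feet
near_park = rng.integers(0, 2, n)      # 1 if adjacent to a park, else 0

# True data-generating process: $150 per sqft plus a $25,000 park premium,
# plus noise standing in for everything else that affects sale price.
price = 150 * sqft + 25_000 * near_park + rng.normal(0, 10_000, n)

# Design matrix: intercept, size, amenity
X = np.column_stack([np.ones(n), sqft, near_park])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)

park_premium = beta[2]   # estimated dollar value of park proximity
```

If we dropped `sqft` from the design matrix and park-adjacent houses happened to be larger, the premium estimate would absorb the size effect; that omitted-variable bias is exactly the “rich people live near parks” problem.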

One fun example I like is “a loon on every lake.” This was a hedonic analysis of lake water quality in the Adirondacks. They collected a boatload of data, including the number of loons present on the lake in the year of the sale and other loon-related characteristics. Many of the other variables were environmental, like the acidity of the water, how close the house was to the water, and the size of the lake. They also had a lot of data on attributes of the house, like the number of rooms, building age, building age squared (because the relationship may be nonlinear), and the square footage.

If you combine all of these things, even ignoring the environment, you can make a very accurate prediction of what the sale price of a house would be. If you look at thousands and thousands of transactions, you can get very accurate predictions.

What they were interested in is not just predicting the price of the house but rather isolating out the effect of the presence of loons. The results show that if you had an 11% increase in loons present in the year of sale, the mean property value impact was $21,803. People love loons. People will pay a lot more money for a cabin on a lake that is pretty enough and wild enough to support loon populations.

You might want to break that down—maybe the loon is just an indicator species. They also looked at many other things. Maybe it’s not just the loons but other bundles of environmental goods like better fishing. Nonetheless, you can use statistical analysis as long as you account for all the non-environmental things like the size of the cabin and distance from the city. The remaining price markup is a good estimate of how valuable nature is.

That’s hedonic pricing or hedonic analysis.

Another one is travel cost.

Our goal here is to understand how much people value things that are hard to put a dollar value on. Often, people spend money on things that are necessary to consume the environmental good. The basic idea of travel cost analysis is that the travel cost, the amount you spend to make it to the park or whatever environmental amenity you’re visiting, is at least a good estimate of your willingness to pay. It’s really a minimum estimate, because you might get a bunch of value on top of whatever you spent traveling there; that’s the whole point of going on a vacation. But at least it’s better than zero. If people spend $1,000 to make it to a lake, that’s at least $1,000 of value they attribute to going to the lake.

The cool thing about this is you can use that information. The fact that visitors come from all sorts of different locations either closer to or further from the park means we can actually look at how likely they are to make a visit to that park and compare it with the distance they had to travel, which is essentially the cost. We can get a full demand curve. This is a demand curve for that park where we have price on the vertical axis and the quantity of visits to that park on the horizontal axis.
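Here’s a sketch of the zonal version of that idea, with invented zones and visit counts: visit rates per capita fall with travel cost, and the fitted relationship lets us trace out the park’s demand curve by asking how visits would drop if a hypothetical entrance fee were added on top.

```python
# Zonal travel-cost sketch: infer a demand curve for a park from how visit
# rates fall with travel cost. All data are invented for illustration.

# (zone population, round-trip travel cost in $, observed annual visits)
zones = [
    (10_000, 10.0, 5000),
    (20_000, 30.0, 6000),
    (50_000, 60.0, 5000),
    (80_000, 90.0, 1600),
]

# Visit rate per capita for each zone, paired with its travel cost
rates = [(cost, visits / pop) for pop, cost, visits in zones]

# Fit visits-per-capita = a + b * cost by simple least squares
n = len(rates)
mean_c = sum(c for c, _ in rates) / n
mean_r = sum(r for _, r in rates) / n
b = (sum((c - mean_c) * (r - mean_r) for c, r in rates)
     / sum((c - mean_c) ** 2 for c, _ in rates))
a = mean_r - b * mean_c

def predicted_visits(extra_fee):
    """Total visits if an entrance fee is added on top of travel cost."""
    return sum(max(0.0, a + b * (cost + extra_fee)) * pop
               for pop, cost, _ in zones)
```

Sweeping `predicted_visits` over a range of fees traces out the demand curve the slide describes, and the area under that curve approximates total willingness to pay for the park.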

One of the reasons this one has generated a lot of interest is because of the data. There’s really fun data you can use. This is research I was involved in at the Institute on the Environment down the hall. We estimated something called FPUDS, which is Flickr Photo User Days.

We found a really rich dataset of essentially geotagged photos. If you look on your phone, a photo often carries the specific latitude and longitude of where it was taken. If that’s uploaded to a database like Flickr, we can use that data.

What we can do is make an inference that if lots and lots of photos are taken within a certain park, that’s a good indicator that a lot of people actually traveled there. We can combine that to create an indicator of, in these different parks in northern Minnesota, how many photos per day on average were there. Then cross-reference this with a travel network showing what are the distances in roads it would take to get there to estimate the travel cost of actually arriving there.
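The counting idea is simple: one “photo-user-day” per user, per day, per park, no matter how many photos they took. Here’s a sketch with invented records; in the real analysis, the park a photo belongs to comes from intersecting its coordinates with park polygons in a GIS rather than a pre-filled label.

```python
# Sketch of the photo-user-days idea: a user photographing inside a park
# on a given day counts once, regardless of how many photos they took.
# Records are (user, date, park) and are entirely invented.
photos = [
    ("user1", "2013-07-04", "Itasca"),
    ("user1", "2013-07-04", "Itasca"),     # same user, same day: still 1
    ("user2", "2013-07-04", "Itasca"),
    ("user1", "2013-07-05", "Itasca"),
    ("user3", "2013-07-04", "Gooseberry"),
]

def photo_user_days(photos):
    """Count distinct (user, day) pairs per park."""
    seen = {}
    for user, day, park in photos:
        seen.setdefault(park, set()).add((user, day))
    return {park: len(pairs) for park, pairs in seen.items()}

# photo_user_days(photos) -> {"Itasca": 3, "Gooseberry": 1}
```

Averaging these counts over a season gives the visitation indicator that then gets paired with travel costs from the road network.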

If anybody’s ever done the Boundary Waters, it’s very costly to get there, right? It’s quite a distance away, whereas things much closer are quite a bit cheaper. This was modified by other data like what the actual entrance fees are, so everything close to the city wasn’t necessarily the cheapest. They just had the cheapest travel component, but there would then be costs of using it.

The idea here, instead of houses, is lakes: we collected data on things like lake size, clarity, and depth, along with other attributes such as whether the lake was in the Boundary Waters canoe area, whether it was in a state park, and, critically, whether invasive species were present. We could then combine those variables with the travel cost in a statistical model to see how much a variable like invasive species reduced how far people were willing to travel to a given lake.

The basic point is that invasive species, like zebra mussels, dramatically decrease the quality of the lake. There will be very few fish, because zebra mussels essentially filter out all the nutrients; they’re also very sharp, so you can’t step on them. We can actually get data showing how much less travel happens to these places, and if we also know how much money people would have spent on that travel, we get a lower-bound estimate of how valuable the lake is and how much value it lost when invasive species came in.

This is a little different than the house example but uses the same basic idea. As long as we can get costs and do a statistical analysis of how much it mattered to people in their decision-making, that becomes useful information for putting a price tag on nature.

We might have to save the last method for the next class, but we should touch on contingent valuation.

What really motivated this method was a series of oil spills and other disasters. A lot of people were upset by things like the Exxon Valdez oil spill, very famous in the history of environmental protection, because such spills create all sorts of ecological damage.

If you were to ask Americans at the time how much they would be willing to pay to avoid an oil spill and simply add that up, that would be a brute-force method with all sorts of challenges. But there are survey methods that let us estimate the overall amount people would have been willing to pay to prevent, for instance, another oil spill.

They did a huge study that found that, on average, each American household would have been willing to pay $31 to prevent another spill similar to the Exxon Valdez. Multiplying that by the number of households in the United States gives a dollar value of about $2.8 billion of damages that people in aggregate would have felt.
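The aggregation step itself is simple arithmetic. A sketch, where the household count is my rough approximation of the early-1990s figure rather than a number from the study:

```python
# Contingent-valuation aggregation: mean willingness to pay times the
# affected population. The household count is an approximation.
mean_wtp = 31.0            # dollars per household, from the survey
households = 91_000_000    # roughly the number of U.S. households at the time

total_damages = mean_wtp * households   # on the order of $2.8 billion
```

The hard part, which we’ll get to, is producing a defensible `mean_wtp`; the multiplication is the easy step.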

What’s useful is that these numbers can be used in court, and they actually were. Many of the lawsuits that followed the Exxon Valdez spill, and many other environmental disasters, rely on a contingent valuation. I’ll save the methods for next class, but the main point is that in aggregate we can identify how much people were damaged. This can be used in court, and Exxon actually had to make these payments, which could go either directly to individuals or toward restoring the environment to its prior quality.

More recently, the Deepwater Horizon spill released roughly 200 million gallons in 2010. That’s the one where the drilling rig exploded. The exact same approach was used there.

Just to close out for today, remember why we spent a lot of time showing where ecosystem services are provided. Most of those were biophysical indicators, essentially the quantity of ecosystem service. But now we’re talking about a big grab bag of different methods where we actually can put a price on it. I didn’t emphasize that in the ecosystem service models we ran, but many of them have options for putting different price tags from these different methods onto those ecosystem service goods.

We’ll leave it there. Have a good Monday. I got the day right this time. We will pick up with contingent valuation next lecture. I also have quizzes to hand back. The scores have been online, but if you’d like to see the actual quiz results from Micro Quiz 3, hang around and I’ll call out all the names—Denton, Alex, Kellen, Rhea, Griffin. I think I left the rest in the office. More next time. Thanks.

Transcript (Day 2)

Alright, let’s get started, everybody. Thanks also for the flexibility on the room change. We will be back in our normally scheduled room for the rest of the semester. This is just the one conflict that we had, and like I said, we’ll end 5 minutes early so the other group setting up their event won’t be delayed. We’re going to pick up on the slides we left off with last class on valuation, and I put together an “Agenda for Day 2” slide to show where we’ll be going.

First, I’ll talk about some updates on what’s coming up next week and where I’ll be, as well as details about what our guest lecturers will be discussing. Then I’ll present the specific and finalized details of the final project. I had a lot of fun working on this over the weekend, but then I got sick—a kidney infection, as it turns out. I’m not contagious, just on antibiotics. I got up to 102.9 degrees, so it wasn’t great. But on the positive side, when it’s just a kidney infection, you recover quickly. It’s not like a lingering flu virus or coronavirus.

I’m feeling okay now. Anyway, I got all excited about the project because we’ve been building towards this. A lot of the skills you’ve been learning—you might be wondering how we assess something like cutting-edge research on earth economy modeling. You can’t do a problem set or a writing assignment very easily, but what I want to do is have this final project be a showcase of all these different lectures. We haven’t had a midterm or anything, and with that in mind, I want to propose an alternative reshuffling of the course and its grading structure. I want to be fair to those who want the syllabus to stay exactly as it is, but I’m guessing many people will be in favor of this proposed change.

We’ll quickly finish up on valuation methods, and then turn back to InVEST one last time for the semester to talk about the water yield model. I’ve sequenced it this way because now we’ve talked about valuation and the provision of ecosystem services, so we’ll tie those together in this last valuation component of an ecosystem service model.

So next week, I'll teach on Monday, but then straight from class, I will drive as fast as I can to the airport. I have a 90% chance of making it onto my flight, but it'll be really close and depends on TSA, which depends on a bunch of other things. We will have class as scheduled, but on Wednesday, Colleen Miller, who is our Senior Biodiversity Scientist at NatCap, will come in and talk about how biodiversity is the basis of all ecosystem services. Some instructors put this lecture before even getting into ecosystem services, because ultimately it's this rich and complex web of life that makes up ecosystems. That's the reason nature provides value to us. We're going to have it now as a retrospective on the foundation of the ecosystem services we've been looking at.

On Friday, we’ll have Distinguished McKnight Professor Carlisle Ford Rungi, who some of you may have had before in class. He’ll be talking about land and the history of economic analysis—or exclusion thereof—of how land affects the economy. One key point is that in the history of economics, original economists really cared about land. But something happened in the 1950s, ’60s, and ’70s where they decided land is just identical to any other type of capital. They decided to have their models only look at labor and capital and ignore land. That’s been really to the detriment of understanding environmental economics.

As for where I’ll be, I’m reusing slides from a previous lecture because it’s literally the same place. I’m going to the Chilean Central Bank, and all these things we’re talking about—earth economy modeling—this is literally what I’ll be presenting to them. I’ll be giving a keynote address at a big conference with representatives from many different central banks who want to implement earth economy modeling. It’s kind of cool that the stuff we’re learning, I’ll probably use some slides from what we’re doing in this class for those central bankers. The big difference is that instead of college students, we’re going to have a bunch of people with a lot of money making a lot of important decisions on environmental protection listening to this content. I’m kind of psyched about that.

So any questions on next week or logistics? Good.

Now I want to switch to presenting the final project. I’ve updated the website with the final project link. I want to walk through this briefly. Basically, I’ve been indicating the direction, but I haven’t given the official details of the assignment. Now I’ve pulled that together.

The idea is that you’re going to imagine you’ve been asked by a senior policymaker in your assigned country to prepare a briefing document. For me, what I’ll be doing next week is exactly that, except for a central bank policymaker in Chile. They want to understand what earth economy interactions their country will face over the coming decades and what should they do about them.

Central banks have long been analyzing climate change and are very worried about systemic risks to their country. Their mandate is to maintain a stable economy and currency. But increasingly, they’re thinking it’s not just climate change—it’s climate change and nature. There’s growing interest from central banks to ensure their country is resilient to both climate change and possible nature crises. So your job is to write a briefing for that hypothetical case.

Your report will address key themes like market failure, sustainability, climate change, land use change, ecosystem services, future scenarios, and basically all the stuff we've talked about in this class. There will be two components to the grading. The first is a 5-minute lightning talk: on the last day of class, everybody will present, but it's just 5 minutes, so that's about two to four slides. There's more detail on the report itself.

The key deadlines are: the rough draft is due the second-to-last day of class, you’ll present the slides version on the last day of class, and then the final report is due on the final exam date.

Here’s where I have a proposal. In the initial syllabus, I proposed that we have both a final report and a final exam, similar to the midterm, on the final exam day. To be honest, this type of material isn’t really good for an exam. Microeconomics is fine for an exam, but when we get to spatial analysis, policy thinking, and sustainability, essay questions don’t work very well. What’s more fun is what we’re actually going to be spending our time on: the final project.

So I would propose to the class that this final project—because I think it’s so cool and it’s definitely not a small amount of work—replace the points that would have gone on the final exam, and we don’t have a final exam. You won’t have to sit down on May 12th and write essay questions by hand, which is the only way to do it in the era of ChatGPT. This would allow you to focus more time on creating a quality report.

No final exam—instead, the final exam date, May 12th, will be when the final project is due. May 1st will be the rough draft for the final project, and the real final project will be due on the final exam day. You can also use any feedback you get from your presentation to improve your final report. There’s a lot more information, including a rubric that specifies what you do for these different steps.

The advantage of having the final exam go away and the report due in rough draft form is that you have plenty of time to respond to feedback and make appropriate changes, rather than having just a single deadline for the report. You’ll basically extract the slides from the essay, so you can guess the key figures you’ll make—like the map of your country with one or two of the ecosystem services. It’ll be a figure in your report, but then you probably just copy and paste it into your slides.

I’m trying to be really fair, because the syllabus is what we agreed to at the beginning, and I care about that. Changing things midstream could potentially be unfair. But I’m guessing everybody’s in favor of this because I think it’s superior in terms of educational outcomes. Here’s what we’re going to do: I’ll update the website with the new percentages for your final grade, taking the score that would have gone to the final exam and putting it into this project. I’ll post that and send an announcement. Then I’ll give everybody a couple days to anonymously submit any concerns. If there are no concerns, we’ll go forward. If there are concerns, I’ll address them. Alternatively, you could have the choice between doing the final exam or just the final project, but that would be far worse because it would still be the same amount of effort on the final project, and you’d get 15% of your score from an additional exam you’d have to take.

You can see under Part A the report—roughly 2,000 words as a guideline, though I won't count words. I'll look at whether you make the key points introduced here: talking about your country and how it fits into the planetary doughnut framing or other framings, talking about the challenges and market failures faced, discussing land use, ecosystem services, natural capital, future scenarios under different SSPs, and ending with policy recommendations and conclusions.

The data you’ll use is stuff we’ve seen throughout the course. I’ve collected key links together here, like the SSP database showing what will happen to GDP in your country under different assumptions. You’ll use real data that analysts use. You’ll also use the country geospatial data I’ve collected for you, which is what we’re going to talk about today and what you’ll use to run InVEST and other analyses.

This is what this whole course has been building to, which is why I’ve been having fun with this. This is the first time I’m teaching this course, and this is essentially not an environment and natural resources course—we’ve had about four lectures on that. All the other stuff is what we’re going to eventually rebrand as “Earth Economy Modeling.” This is the frontier of where this type of analysis is going.

For the rest of today, for the next 25 minutes before we end 5 minutes early, we’re going to finish up on the valuation component and dive into the InVEST water yield model.

Just picking up where we left off, I talked about one of the last methods: contingent valuation. We discussed how that was used for big environmental disasters like oil spills. It’s been really important to environmental economics discourse because the dollar values assigned—these billions of dollars of damages to the environment—are actually what corporations have to pay, at least if they lose the lawsuits.

These work through court cases. Some body of people sues an oil company and says you’ve caused us harm. It’s a very standard lawsuit—like if somebody crashed into your car and refused to pay, you’d have a civil claim against them for damages. The difference is it’s much harder to put a dollar value on the damages. In a car case, you go to a mechanic and ask how much it would cost to fix it. The court case would be about whose mechanic was right. The assessment of whether the car is fixed is pretty clear, so there’s usually not a lot of variation.

It’s exactly the same with environmental cases. You have experts—scientists instead of mechanics—propose how much it will cost to fix this. In an oil spill case, some costs are easy to identify: the amount of money spent on containment boats, fuel, the captain’s salary. But there are all sorts of other values that don’t have an easy dollar value. What’s the value of all the birds that were lost? The fact is, there may be damages that experts can identify but that don’t have a market value easily identified.

When experts go to court, they use methods specifically like contingent valuation to show what the average person would pay for preventing that environmental harm.

The general approach is to ask people, in the lab or in the field, how much they care about this. Experts will mail physical letters to randomly chosen folks, asking how much they would pay. This is problematic because you can make up whatever number you want. You don’t actually have to pay it.

Some better approaches use real money. A key example was a contingent valuation approach for hunting access. Hunting permit applicants were told they could have their free hunting permit or a $100 gift card, but only one. Hunters who really love hunting presumably wanted their license, and we have good evidence they valued it more than $100. The researchers sent out different gift card values to figure out how much hunting was worth. This is clever because it used real money, not just a fill-in-the-blank with a made-up dollar value.

The original study actually sent a check and checked whether it was cashed. If it was cashed, they canceled the hunting permit. But the point is, there are many different ways to ask people and get a dollar value that, if done well, is hopefully usable in court to establish damages.
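Just to show how those varied offer amounts become a dollar figure: a standard way to summarize accept/reject responses at different bid levels is the nonparametric Turnbull lower-bound estimator. Here's a minimal Python sketch; the bid amounts and permit-keeping shares are made-up illustration numbers, not data from the actual study.

```python
# Turnbull lower-bound estimate of mean willingness to pay (WTP).
# Each applicant faces one offer (a gift card worth `bid` dollars) and
# either keeps the hunting permit (WTP >= bid) or takes the card.
# Bids and shares below are hypothetical illustration values.

def turnbull_lower_bound(bids, keep_shares):
    """bids: ascending offer amounts; keep_shares: fraction keeping the permit."""
    s = list(keep_shares)
    # Pool any violations so the survival curve is non-increasing.
    for j in range(1, len(s)):
        s[j] = min(s[j], s[j - 1])
    s.append(0.0)  # assume nobody keeps the permit at an arbitrarily large offer
    # Mass between consecutive bids is conservatively valued at the lower bid.
    return sum(bid * (s[j] - s[j + 1]) for j, bid in enumerate(bids))

bids = [50, 100, 150, 200]               # gift-card amounts mailed out
keep_shares = [0.80, 0.55, 0.30, 0.10]   # share refusing the card at each amount

lower_bound = turnbull_lower_bound(bids, keep_shares)  # 87.5 dollars
```

The estimate is a lower bound because everyone between two bids is valued at the lower one; and because real money is on the table, it avoids the made-up-number problem of pure surveys.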

Problems with contingent valuation arise when you don’t use payment card methods. People see these surveys, and if they’re clever enough to know it’s probably for environmental valuation and they really like the environment, they’ll say a really high value because they don’t have to pay. There’s very good evidence in the literature that respondents claim two to three times higher willingness to pay than studies using real money. So people do game the system.

That’s contingent valuation. It’s tightly related to choice experiments, which derive from a similar idea but consider a more complex set of choice options. A typical experiment might be aimed at eliciting what ecosystem you really prefer. You could give a choice between an ecosystem that’s a park with benches and garbage cans and development that changes species distribution—maybe mice come in around the garbage cans while songbirds leave—versus another option with a path but no infrastructure and a different set of animals.

This method asks people which bundle of goods they’d prefer. There was a real DNR study trying to figure out the value of lake clarity under different configurations. They had lots of different scenarios with things like lake color, boat launch availability, and distance from your home. By polling enough people, you can create a statistical model that predicts which lake people would choose. This is useful for eliciting value on a bundle of different ecosystem amenities. A clear lake is good, but it depends on whether you’re a fisher person who likes the fish or a wakeboarder who likes clean water.
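The statistical model behind this kind of study is typically a conditional logit: utility is a weighted sum of a bundle's attributes, and choice probabilities come from a softmax over the options. Here's a toy sketch; the attribute names and coefficient values are hypothetical, not estimates from the DNR study.

```python
import math

def choice_probs(alternatives, beta):
    """Conditional logit: probability of choosing each alternative.
    alternatives: list of attribute dicts; beta: taste weights."""
    utilities = [sum(beta[k] * v for k, v in alt.items()) for alt in alternatives]
    exp_u = [math.exp(u) for u in utilities]
    total = sum(exp_u)
    return [e / total for e in exp_u]

# Hypothetical taste weights: clearer water and a boat launch are good,
# distance from home is bad.
beta = {"clarity_m": 0.6, "boat_launch": 0.8, "distance_km": -0.05}

lakes = [
    {"clarity_m": 4.0, "boat_launch": 1, "distance_km": 30},  # clear but far
    {"clarity_m": 2.0, "boat_launch": 0, "distance_km": 5},   # murky but close
]
probs = choice_probs(lakes, beta)
```

Fitting the weights to many respondents' choices is what lets you trade attributes off against each other, for example how many extra kilometers of driving a meter of water clarity is worth.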

Then the last method is benefits transfer, sometimes referred to as the dark arts. It’s called that because most academics find this method very problematic. However, it’s the easiest to implement, so high-paid consultants use it when the government pays them to put a dollar value on nature.

The benefits transfer approach is fraught. Remember how we talked about the Costanza estimation of how valuable all of nature was, and I was very critical? They found nature was worth $33 trillion. What they did was one gigantic benefits transfer, transferring studies on how much a particular parcel of land is worth to all such hectares of land on Earth with that same type.

Benefits transfer uses existing studies where somebody did a good job assessing ecosystem service value through any of the other methods we discussed. They do a literature review, extract the willingness to pay from those studies, and ideally construct a function describing all studies, mapping how the value depends on something like lake clarity. Then you take that willingness to pay and transfer it to all hectares of that land type.
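In its simplest "unit value transfer" form, the arithmetic is tiny, which is exactly why it's so tempting. Here's a sketch with a common income adjustment; every number, including the income elasticity of WTP, is hypothetical.

```python
# Unit-value benefits transfer sketch: take a per-hectare value estimated
# at a study site, adjust it for the income gap between study and policy
# sites, then scale to the policy site's area. All inputs are hypothetical.

def transfer_value(study_value_per_ha, study_income, policy_income,
                   policy_hectares, epsilon=0.5):
    """epsilon: assumed income elasticity of willingness to pay."""
    adjusted_per_ha = study_value_per_ha * (policy_income / study_income) ** epsilon
    return adjusted_per_ha * policy_hectares

total = transfer_value(study_value_per_ha=120.0,  # $/ha/year at the study site
                       study_income=60_000,       # per-capita income, study site
                       policy_income=15_000,      # per-capita income, policy site
                       policy_hectares=50_000)    # area being valued
# total == 3,000,000 dollars per year
```

The weak step is the last multiplication: assuming every hectare carries the same adjusted value is exactly the move that ignores spatial heterogeneity.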

In the Costanza case, they had studies on how valuable ocean water access was for fishing and recreation, and then assumed every hectare of ocean had that same value. Can anybody think of a specific reason this wouldn’t be a good method for establishing the total ocean value? The studies they based this on were heavily biased towards extremely high-value areas—Miami’s real estate amenity value is probably much higher than the Congo coastline and much, much higher than a random hectare in the deep sea with no beaches or surfing. The problem is it’s hard to get a correct value that scales with spatial heterogeneity. The Costanza numbers were heavily criticized for essentially taking the value of Miami beachfront as the value of every single part of the entire ocean, which doesn’t seem quite right.

There are legitimate applications of this method when done less egregiously. But when ecosystem services start becoming valuable to governments, consultants making money off this science crop up and essentially make the science bad, because they'll do whatever is easiest, and easiest to sell, to their clients. That's my personal take.

I want to tie this back to Minnesota very briefly. We may delay the water yield model until next lecture, but I want to give a shout out to the reality on the ground. We’ve been talking about ecosystem service models showing that tools like InVEST make it easy to estimate value. In reality, when you look at specific cases, it gets much more complicated. It’s hard to put an adequate dollar value on something when there are many different types of users.

There was a really good study by my former employer, Bonnie Keeler, now director of the Water Research Institute, where they looked at this stuff closely. They tried to figure out what you need to consider when tracking why people might value water quality. They argued you need a valuation approach sensitive to different actions affecting water quality, one that identifies different use endpoints—how people actually use the water—and recognizes that there are unique groups of beneficiaries all differently affected by environmental changes.

They presented a way of thinking about how you’re going to assess some set of actions affecting water quality and see how, under different action sets, you have changing quality. Research links those two. Then, once you get a change in quality, you identify the change in ecosystem services. But critically, as a last step, you have to think about the change in value specific to different benefit groups.

They mapped this out with the specific case of Minnesota lakes. What are the actions that might happen? They identified primary and secondary drivers. For example, with nitrogen from applying fertilizer, you have an action causing more nitrogen in water, which affects water clarity through algal blooms and secondary effects on fish abundance and pest abundance. They continued with all different actions and drivers like sediment, temperature, or toxins.

These have different effects on different parts of the value change. They identified specific Minnesota lake ecosystem services: lake and river fishing, swimming, boating, trout angling, nature viewing, navigation, hydropower, commercial fishing, and safe drinking water. These are things people spend a lot of money on. You then have to get from that physical change—like swimming or fishing—to the dollar value associated with it.

They enumerated all the different methods. From this list, you can see references to specific valuation examples for Minnesota lakes: avoided sedimentation through avoided water treatment costs using the avoided costs method, value of swimming, which is harder to estimate but might come from contingent valuation, and value of avoided death or illness through irrigation. We’re going to have a special lecture on how to deal with changes in ecosystem services and their value when they prevent people from dying. There’s a whole additional set of literature there, and that’s a big component.

Now I want to quickly introduce the InVEST annual water yield model. You don’t have to open it up—we’re going to save actually running it for next class. I want to go slow and spend time because this is where we’ll actually look at the data you’ll be using for your final project.

The basic idea is water yield. When we refer to water yield, we’re referring to water that is yielded into the economic system. In this case, we’re talking about a specific use: water that is in a reservoir—the area behind a dam—which is particularly useful because you can pump it to a water treatment plant for drinking or straight to fields for center pivot irrigation.

What factors determine what water is yielded into that reservoir? Obviously, precipitation matters as the key input, but a whole bunch of other stuff matters too. You have to think about what happens underground and what the vegetation does, because both are important for the final thing we care about: yield.

You have to think about the inflow—like with sediment retention, water flowing in from other locations matters. Some of the precipitation, together with that inflow, infiltrates underground, and part of it becomes groundwater recharge, which is certainly valuable for anybody with a well. This is why your well doesn't go dry. But from the perspective of the reservoir, water that infiltrates into the ground isn't available as yielded water.

The second thing everybody knows about is evaporation. Depending on temperature, wind, and exposure, some precipitated water evaporates before it goes below ground. But the critical thing many people forget is the complexity of the ecosystem through transpiration.

Transpiration is the water on the surface or underground that gets sucked up by plants. Plants turn it into sugars and other compounds that help them grow. What's really important is that transpiration keeps that water from being yielded to the reservoir, so you need to account for what vegetation is there. Vegetation matters in some good ways too: it can slow down water flow and increase groundwater recharge by slowing water movement. But it also changes timing. Some water will transpire upward, which is bad for short-term yield but good in the sense that transpired water eventually precipitates again, spreading out the window in which water reaches the system.
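To make that water balance concrete, here's a small per-pixel sketch in the spirit of the Budyko-curve formulation that InVEST's annual water yield model is built on: yield is precipitation minus actual evapotranspiration, with vegetation entering through the crop coefficient and plant-available water. The input numbers are illustrative, not real Nicaragua data.

```python
# Per-pixel annual water yield, Budyko-style sketch (hypothetical inputs).
# yield = (1 - AET/P) * P, with AET/P from a Fu/Zhang-type curve.

def water_yield(precip_mm, et0_mm, kc, awc_mm, z):
    """precip_mm: annual precipitation; et0_mm: reference evapotranspiration;
    kc: vegetation coefficient; awc_mm: plant-available water content;
    z: the seasonality parameter we'll set to 15 in class."""
    pet = kc * et0_mm                        # potential evapotranspiration
    omega = z * awc_mm / precip_mm + 1.25    # curve shape parameter
    ratio = pet / precip_mm
    aet_frac = 1 + ratio - (1 + ratio ** omega) ** (1 / omega)  # AET / P
    return (1 - aet_frac) * precip_mm        # mm of water yielded per year

yielded = water_yield(precip_mm=2000, et0_mm=1200, kc=1.0, awc_mm=100, z=15)
```

Notice that a higher kc, meaning thirstier vegetation, raises evapotranspiration and lowers yield, which is exactly the transpiration effect at work.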

We’re going to pause there and pick this up next class when we actually run the water yield model. That’s the basics of what we’ll be computing using geospatial data.

All right, thanks everybody, and have a good weekend.

Transcript (Day 3)

Alright, let us get started. Welcome to Day 3 of our valuation slides. You can refer back to the same PowerPoint slides that we’ve been using. I’ve updated them, so refresh your browser if you’re the sort that always keeps all of your Chrome tabs open.

Today, our agenda is that we are going to dive straight into water yield, using your country’s data, which will also get you up to speed on the final project. Then we’ll spend some time on the value of a statistical life. We’re still in the ecosystem service valuation section because one of the ways that ecosystem services benefit us is they keep us alive.

The extent to which ecosystem services reduce mortality from, for instance, air pollution is going to be one of the key channels by which we obtain value from them.

A couple of logistical comments: I didn't hear any objections from anybody on the proposed plan to modify the final exam. The shift means that the final project is taking the place of the final exam. You'll be able to turn it in early, so if you have travel plans, this solves any travel-related issues. You'll be presenting it on the last day of class, which is many days before the final exam day. It's like the 4th or something like that.

I heard no objections, so I’m going to update the syllabus and the scores in Canvas to re-weight the grades accordingly. This will give you more time to focus on what I think is the novel part of this class, rather than midterm-type exams. Any questions on that? Are we good?

Speaking of those reports, I keep referencing this, but as a Midwesterner, I don’t like to gloat, but I’m going to anyway. There will be guest lectures, and they just came out with the preliminary agenda. I’m giving a keynote lecture. The main speaker is the head of the Network for Greening the Financial System, which is an organization of about 140 central banks working to future-proof their banking strategies so that climate change and nature collapse don’t cause their economies to collapse. These are some really big names, and I’m a little bit nervous now. I have to talk for an hour and a half, which is longer than any lecture I’ve given before, and I’m having to generate new slides.

Picking up where I said we're going to dive straight into one of the ecosystem service models explicitly included in the report I'll be presenting to the Chilean Central Bank: the water yield model. We went through the science of it very quickly at the end of last class. Basically, water yield is precipitation minus evapotranspiration, except that plants play a really big role. Their roots increase the extent to which water infiltrates and recharges the groundwater rather than just running off. But simultaneously, plants are sucking up some of that water through their roots and transpiring it back into the atmosphere.

I want to connect this to the data that you have been collecting for your class. I’ve made some screenshots in the slides if you want to refer back to this. Let me show you how I’ve organized my data. You’re obviously free to do it however you want.

I have a folder for APEC 3611. These are the repositories I used to publish the course website. Before, we’d been using the InVEST base data, where you saw all the different ecosystem services and their data ready to put into the model.

I’ve taught InVEST a lot of times, and almost always the main catching point is people rightly point out that it’s easy when you give me all the data. It’s a little bit falsely easy when all of the data is nicely prearranged. That’s why for this class, I would actually recommend having another folder out here just to keep things separate for your final country report. That’s where you can download your country’s zip file.

I’m going to use Nicaragua, because I believe nobody had Nicaragua. I put that here, and we’re going to be using this today in our InVEST stuff. Just taking a look at it, it’s not organized by model per se, and so one of the key jobs you’re going to do in this country report is figuring out what’s going on here.

As a dedicated researcher, I find that most of my time when I’m actually doing new projects is just looking through datasets, reading documentation. This is a very real skill that you use. One of the files I want to reference is the documentation itself. In Nicaragua it will look a little bit different for each of the different countries, but not too much.

It’s so tempting to ignore files like this readme.pdf. Don’t. This is where really condensed information is. All this data comes from the Integrated Economic Environmental Modeling Platform from one of my good colleagues and friends, O’Neill Banerjee. This will describe what’s inside of these data packets. They’re supposed to be plug-and-play for the four InVEST models that you will have the option of running. It also gives a really good set of descriptions for how you might go about doing it, describing the different sections and notes on the four models. I’d really recommend reading this to get up to speed on what’s in here.

This will describe where the data came from. If you were to have to do it for another region for which you didn’t have a person providing you the data, this is where you would go. It documents exactly how you would do it and get it for yourself.

This is a really useful table showing the four ecosystem service models that these data packets are built to support. It also shows which of the data layers are used in which of the different models. For instance, the carbon storage model will use land use and data on soil carbon storage, while annual water yield will use more data. Please check those out in detail.

For specific countries, many of them have country-specific information about the data there. There are always different challenges when you’re working in many countries. Even when you’re using global data, there might be different complications or omissions of data. You’ll want to read through these. A lot of it will be the same as what’s in the README, but it will note any important things that you need to know for your own country.

I would strongly recommend, if you’re using Google Drive Desktop Sync, copy this over onto your computer. It can get challenging when you’re pointing to cloud-hosted files. The best way to do it is to download it. I’ve downloaded the zip file and extracted it, so I’m not working on my Google Drive.

With all that said, I’d like you to go ahead and open up both InVEST and QGIS. We’re going to dive straight into the water yield model.

As that loads up, here we’re at the home screen. If it loads up into one of the different models and you don’t see this full list, you can just click on the home button and that brings you back to this interface of all the models. We are going to go into the annual water yield model.

This should start to feel familiar now that we've done it with two other models. The big difference today is that we're going to make sure our workspace points to your country-specific folder. For me, since I'm the one teaching this class, my final report country is Nicaragua. That's where I would recommend setting your workspace.

I’m just going to pause to make sure everybody’s up to speed, because I don’t want to power ahead.

Your file will not be named NIC. That's for Nicaragua. Yours will have a three-letter code from the ISO 3166-1 alpha-3 standard, which assigns three-letter codes to countries.

Do you have it downloaded at least, or are you still finding it?

All right, I’ll start moving. Some of you are still working on getting it. I’ll check back again just to make sure everybody’s there.

One thing to note: we’ve been skipping the file suffix option. This is actually a really useful trick. If you want to run multiple different versions of the model—say you want to test what happens if you use one precipitation layer versus another—this little box lets you put a suffix on the end of all the files. So instead of each new run overwriting the previous files, it will have that suffix on it. You’ll have two different files that you can compare. Possibly useful.

For precipitation, that’s obviously one of the drivers of water yield. This is where we’re just going to get ourselves used to navigating a different file structure, but this one’s pretty straightforward: annual precipitation. Mine is Nicaragua annual precipitation.

For reference evapotranspiration, here’s where we got one. It’s Nicaragua reference evapotranspiration.

Root-restricting layer depth is an important one. This is a geospatial map indicating the soil depth at which root penetration is strongly inhibited because of physical or chemical characteristics. This really matters because it's essentially how much soil depth you have before hitting bedrock. If you're on a mountainside that hasn't had much vegetation growing on it for very long, you're going to have pretty thin soil, and the root-restricting layer is going to be relatively close to the surface. Essentially, where does it get stony? That information can be derived from soil maps, and it has already been preprocessed here.

Plant available water content depends on the soils of the area. What is the fraction of water that can be stored in the soil profile available to plants? This depends on all sorts of different attributes of the soil, but it's critical in our context because it determines how much percolates into the groundwater versus evapotranspires up through the plant.

Land use land cover, we know that one. This is LULC from the CCI, the Climate Change Initiative of the European Space Agency. ESA does a whole lot of interesting research nowadays. They don't have rockets as good as we have, but they put them to better use. Here you're going to point to the .tiff file, not the XLS.

For biophysical tables, that’s under Model Lookup Tables. Make sure you point it to the annualwatercci.csv file.

Now it’s going to be a little harder. If you really want to impress me on the final project, you could look up the values for these different parameters specific to academic literature published for your country. Or you could just use the user’s guide, which gives you the typical range. Let’s just choose something in the middle. You could always spend a lot more time justifying this better. If you submit this to peer review and you’re going down the route of being a scientist, this is usually where you get criticism: why did you choose a value of 15? Is that just the middle of what the user’s guide says? That’s not a very good argument. But it might work for now. We’re going to use a Z parameter of 15.

Then we have a few more elements. We are going to give it a watershed vector file and a sub-watershed vector file. Let me talk about watersheds themselves. I’ve referenced this very briefly before.

If you have a country, let’s just pretend this is an island country. Like most islands, it’s probably volcanic, so there’s probably a mountain right in the middle. Just pretend that’s the topography.

If you wanted to know what the watersheds are, and here is the peak, you can think about defining a watershed by asking, for some random location: where would a drop of water flow? It kind of flows down through the valleys and eventually flows out.

Then you ask, what about some other point? That’s going to go down some other valley. These aren’t stream networks, though it looks like it. At some point the flow accumulation gets so much that it actually does become a stream. But basically, a watershed is going to be defined by the very bottommost point and the whole area that flows into it. It’s basically just a catchment that we talked about before on the sediment retention model.

The cool thing is you can also think about sub-watersheds. This is the whole watershed because it really ends here, but you could also say, what about if we started counting from here? Let’s draw the sub-watershed, which would be the subset of the watershed of all the points that flow into this point, which will then obviously continue on down.
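To make the “drop of water” definition concrete, here is a toy sketch in Python (not InVEST’s actual routing code, and the cell names are made up): represent the terrain as a flow-direction map where each cell points to the neighbor it drains into, and a watershed is just the set of cells whose flow path reaches the outlet.

```python
# Toy watershed delineation. Each cell in `flow_to` points to its
# downslope neighbor (None means the water exits the map). Since water
# flows downhill, the paths have no cycles.

def drains_to(cell, outlet, flow_to):
    """Follow the flow path from `cell`; report whether it reaches `outlet`."""
    while cell is not None:
        if cell == outlet:
            return True
        cell = flow_to.get(cell)
    return False

def watershed(flow_to, outlet):
    """All cells whose water eventually passes through `outlet` (itself included)."""
    return {c for c in flow_to if drains_to(c, outlet, flow_to)}

# A tiny 'volcanic island': two valleys (a, b -> c) and (d -> e) merging
# at e before the bottommost point f, where the water flows out.
flow_to = {"a": "c", "b": "c", "c": "e", "d": "e", "e": "f", "f": None}

print(sorted(watershed(flow_to, "f")))  # the whole watershed
print(sorted(watershed(flow_to, "e")))  # the sub-watershed draining into e
```

Everything on this little island drains through f, so the watershed of f is the whole map; the sub-watershed of e is the same set minus f itself, exactly the “subset of the watershed of all the points that flow into this point.”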

Hydrological engineering comes in here, and lots of different models operate watershed by watershed. You might be thinking about what happens chemically as nutrients and other things mix with everything else coming in while the water travels down the stream and eventually makes it out to the ocean or the reservoir. So we’re going to look at it in terms of both the watershed and the sub-watershed.

For this one, it’s nicely labeled. The first one, you select watershed. And then for the other one, you select sub-watershed. I’m going to circulate around and check on people’s data, but keep powering ahead if you’d like.

Some of the countries aren’t formatted the same way, but so far we’re looking great.

For the optional demand tables, you could run it without these, but we need to find them. I actually thought that was in here. Did anybody find theirs yet? We’re going to run it without it.

For your final project, this is going to be one of the lifting points if you want to include the water yield model: figuring out how to work with this. The user’s guide is going to be essential. I went ahead and clicked Run, so you can do that as well.

For the first time, we’re not all running on the exact same data. This is where you get to discover if your country is large or small. One of the things that much of my professional life has been focused on is getting models to run faster on computers. I chose a very small country, Nicaragua, so it only took 4.18 seconds. But as you get to bigger and bigger countries, no matter how fast your computer is, it starts to become a big data challenge.

Who has the largest country here? Mexico, so you might be one. Are you still running? Okay.

The real reason I wanted to do this in class is to inoculate you all against the reality that geospatial analysis can take time. This is kind of our first stress test of your computers. If you really want to throw in the towel and choose a smaller country, I will maybe allow it. But see if you can do it, okay? Let me know if you have any troubles. It should be able to run in a reasonable amount of time. But if you’re one of those big countries, you might not get results just yet. What you’ll have to do is plug in your computer and wait overnight or something like that.

If you think you’re not going to get to a battery charging facility for your computer, you might want to hit pause. I’ve had that happen before where I’m running a model and I can’t even drive home fast enough to get my computer plugged back in. When you’re using all the cores on your CPU, it uses your battery way faster.

Okay, only one finished? We’re doing good then. Let’s take a look at the results.

This is one of the models where the developers of InVEST have not yet created an automatic report. The other ones had two buttons here: Open Workspace, which opens the folder, and also Open Report, which has that nicely formatted HTML document. This is one where you just kind of have to do it yourself.

What we see here is we chose Nicaragua, or whatever your country is. I might have been wise to put it in a separate folder from the input data, but this one is kind enough to have an output data folder here. That’s where the results are.

I’m going to open up my QGIS and show you the first and most basic thing we might want to do with shapefiles in QGIS. Let’s start with the easy one: watershedResultsWyield.shp. I’m going to drag that over into my QGIS.

Here are all the watersheds present in Nicaragua.

Because we’re also learning the basics of QGIS, let me point you to some of the key things you can do here. Like before, with your raster, if you lost it altogether, you can go to zoom to layer if you’re pointing at somewhere that doesn’t look like it has any data. You can also select specific polygons with this tool here.

Another useful thing is this one here with the little information icon. If you click that and then click in a polygon, it’s going to show you all of the information that we know about that polygon. Maybe too much information about it. But this shows you what this data is containing. It has these polygons, and for each polygon, it records a bunch of extra information. If you become a GIS expert, you start to learn a lot of what these different things are.

I just point you to the end, and notice that we have water yield MN and water yield volume. These are going to be the results that InVEST generated.

I also want to show you how to look at the table of all the results. So instead of just one polygon, what if we want to see results for water yield for all the polygons? For that, right-click on your layer and go down to Open Attribute Table.

What we got is a layer where each row is the data associated with one of the polygons. Before, when I clicked on it, it was essentially just showing this row. But now we’re seeing all the different polygons. Another fun thing you can do is click through this table, and it will highlight which of the polygons it’s representing. Looking here, a lot of this stuff wasn’t generated, but we’re going to skip that. What we really care about is if you scroll all the way to the right.

That’s where we have water yield volume. This is going to say what is literally the volume of water measured—I believe it’s in cubic meters; you can refer to the user’s guide if I’m wrong—but this is saying that given everything we know about evaporation, transpiration, root interactions, et cetera, this is how much flows into the bottom point of each of the sub-watersheds.

I see that some of the watersheds don’t have the sub-watersheds, so you might want to load up that secondary result, the subwatersheds.shp file, and look at it there. But the point is, this is the key result.

Tying this back to how we’ve been talking about ecosystem services: you have the ecosystem structure. From there, it produces, through an ecosystem service production function, some level of biophysical ecosystem service provision. That’s what we have in this column here: the volume of water. The ecosystem service value provision is the thing that you can then, hopefully, multiply by some price or other monetary metric that brings it into economic terms.

Water yield is a biophysical variable, not an economic one. But you might think that for water, it probably does have some sort of connection directly into the economy.

Let me circulate around and see the status of everybody’s model.

Good question: if you haven’t gotten this table up that I have on my screen, just a reminder—go to your Layers tab and right-click on the layer that you want to look at and go to Open Attribute Table.

Some people are asking about the icons for different tools being in different locations on people’s computers. The tool that you want to select—the button you want to hit if you want to select a specific polygon—is the one that has the pointer arrow inside a little box next to a bigger box. This one here. Arrow pointing into a little box and a bigger box. That’s the polygon select feature. Another thing to note is you can select multiple. That can be useful too.

This class is way above average in terms of tech competency. That’s very good.

The last thing I might say, in terms of the sort of bonus training in GIS that you’re getting from this class, is that oftentimes maybe you only care about one sub-watershed, or maybe you have a map of the world and you only care about one country. QGIS gives you a very easy way to create new shapefiles. Say we only care about this watershed, this watershed, and this watershed. If you select the ones you care about, you don’t have to do this—I’m just illustrating a result—right-click on your layer that you have loaded and hit export.

This is where it gives you the option to save features as. What’s really nice is it defaults to GeoPackage instead of shapefile. I threw all that shade at the shapefile format, but I’ve been too busy prepping for my Chile trip to actually fix the data layers for you. So I guess I’m a hypocrite. Either way, when it saves, it will be a GeoPackage, but you could switch it back to shapefile if you like it old school.

Any questions? Opening the feature table: select your layer, go down to Open Attribute Table. Yeah, it’s a little bit different than ArcGIS. They call it attributes instead of features.

Unless there are any questions, I want to pivot to the topic of what do we do with this? We got the volume, right? What might we want to do if we are going to a policymaker and making an argument that some sort of environmental restoration program is or is not worth it? Basically, we’re trying to do a cost-benefit analysis, but now properly including all these other values that are either ignored as externalities or simply ignored because we don’t even know the value. That’s worse than just an externality. An externality, you can at least see as something that we don’t have people valuing, but we know it’s there. This is even worse—an invisible value.

The valuation method that picks up where InVEST leaves off is really straightforward because it’s basically market plus physics. We have a pretty clear idea of why we spend so much money having our Army Corps of Engineers building dams. When you create a dam and put it at the bottom of a watershed or sub-watershed, you then know that based on how much water you let through, some subset of this is going to fill up to the height of the dam.

In principle, if you had a super high dam that was higher than the elevation around it, it would eventually fill up the whole thing. That would be a pretty bad idea because now you’d have a really massive spill on your hands. But the point is, however high we build this dam determines how much reservoir capacity there is. That captures the results we just computed in InVEST. Since we’ve got our water volume for each one, we’ll just need a little more information: H_D, the height of the water behind the dam at the turbine.

There are some other coefficients in here, like g, the gravitational acceleration, approximately 9.81 meters per second squared. A lot of calculations are based on that; gravity determines how much electricity we can get from the falling water. There’s also rho, the density of water, 1,000 kilograms per cubic meter. That round number is no accident: the kilogram was originally defined from the mass of a liter of water.

That’s a little bit of what goes into the calculation. You would essentially need to get information from the government agency responsible for managing the dams. That’s actually pretty easy to get.

There are a few additional things that are a little harder. You need to know about the turbine efficiency: what is the percentage of inflow water volume at the reservoir that will actually be used to generate electricity? Most dams let through a large portion of the water without ever using it to generate electricity because there might be more water flowing through than you have the capacity to put through the generator.

The final calculation pulls in all of the economically relevant components. What do we get? The price of electricity multiplied by the energy generated, which we calculated above from the previous two equations, minus total costs. Now we’re back to basic econ, right? This is just total revenue minus total costs.

Then we’re going to add one extra term: the discounting factor. We’ve seen this a bunch of times. The farther out into the future a year’s flow of value accrues, the larger the denominator (1 + r)^t gets, which makes that year’s discounted value smaller. This means that dollar values farther out in time t are worth less today. Summing over the years gets us the net present value of hydropower at that dam.
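The whole chain, from water volume to net present value, can be sketched in a few lines. Every number here is an illustrative assumption (the water yield, head, turbine share, electricity price, costs, discount rate, and horizon are all made up for the example):

```python
# Hydropower valuation sketch: physics (potential energy of the turbined
# inflow) times market prices, discounted over the dam's lifetime.

RHO = 1000.0   # density of water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def annual_energy_kwh(volume_m3, head_m, efficiency):
    # Gravitational potential energy of the inflow that actually runs the
    # turbine, converted from joules to kWh (1 kWh = 3.6e6 J)
    return efficiency * RHO * G * head_m * volume_m3 / 3.6e6

def npv_hydropower(volume_m3, head_m, efficiency, price_per_kwh,
                   annual_cost, discount_rate, years):
    # Price times quantity minus total cost each year, discounted to today
    energy = annual_energy_kwh(volume_m3, head_m, efficiency)
    return sum((price_per_kwh * energy - annual_cost) / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# Illustrative inputs: 1e8 m^3/yr of water yield, 50 m head at the turbine,
# 60% of inflow turbined, $0.08/kWh, $200k/yr costs, 5% discount, 30 years
energy = annual_energy_kwh(1e8, 50, 0.60)
value = npv_hydropower(1e8, 50, 0.60, 0.08, 200_000, 0.05, 30)
print(f"{energy:,.0f} kWh/yr, NPV ${value:,.0f}")
```

The water yield volume from InVEST slots in as `volume_m3`; everything else would come from the agency managing the dam.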

What makes this relevant to policymakers is that if there’s ever a cost-benefit analysis where something is going to disrupt the hydrological cycle—or maybe we’re going to divert water to irrigation and it doesn’t make it into the hydropower dam—a model like this lets us more accurately compare the costs and benefits.

Has anybody heard about the project to pipe water from Lake Superior down to Arizona? That’s a big example of this. Huge diversion projects. I don’t think that one’s going to go through, but this is something that actually does happen. There’s going to be an example of local costs and benefits differing from the larger scale costs and benefits.

That’s the valuation method for hydropower, which leaves us just 10 minutes to talk about the very last one: the value of a statistical life, which we will henceforth just call VSL (you’ll see that abbreviation a lot). What’s kind of nice is that we already have the basic method for calculating it, and that’s going to be hedonics in most cases.

You might be thinking back to hedonics. We talked about what’s the value of a lake. We talked about how different amenities on the lake—is there a boat launch, how clear is the water—looking at the changes in the clarity of the water and how that affected sale prices of houses on that lake, gave us information about how much people cared about water quality. That’s hard to get. We can do the exact same approach to get how much people care about a life.

I don’t know. Some people have cognitive dissonance putting a dollar value on a life. Does that seem wrong to anybody? When you put it in a specific person’s terms, it becomes a really hard-to-assess thing. But the fact of the matter is, the government does this calculation all the time. What’s sort of interesting is different agencies have different numbers. The Pentagon has a much lower price or value that they put on a statistical life, and it’s actually quite relevant to them because they do lose lives. They don’t use hedonic analysis; they use replacement cost—essentially, how much does it cost to train up a soldier? I can sort of see the logic at least from a decision-making metric.

Oftentimes, though, we’re not the military, and we’re caring about environmental things. How can we do it without looking at it from the perspective of replacement cost? Well, there’s been tons of really awesome academic literature on the point that you can use people’s observed market behavior in job markets to determine how much they care about, in fact, their own life.

This comes from the fact that, just like with a house or a lake, there’s going to be a big bundle of different attributes that they care about. For a job, it might be what are the responsibilities, do they get to be a supervisor, do they need to travel, can they work from home? That’s a big one now. What are the hours? But then, critically, there’s this one that’s kind of unique: risk.

We don’t think about this too often. A lot of the jobs that people who are college-educated take on tend to have essentially zero risk. But there is actually a ton of data about risk in jobs. Things like working on an oil rig or driving a truck through a war zone have risk. There’s Ice Road Truckers—a risky job. Another one is Deadliest Catch, where they are catching crab in the Bering Sea. That’s also very risky. Any job has some sort of inherent risk of dying.

We can leverage this fact and all the different observations on how much these riskier jobs pay their workers to determine, in the same way that we used it for house prices and nature, how much a human’s life is worth.

Basically, the way it works is you collect data on all those job characteristics—the things that might matter, like are you a supervisor, what are the hours? It’s kind of like how many bathrooms in a house or whatever—because you’re trying to isolate the effect of the risk.

You use a statistical model that predicts wage as a function of education of the workers, physical attributes, hours, distance, work from home—a big one. But then you get all these things. Hopefully you’re going to describe all of the attributes that go into predicting wage. But then one last one: risk.

This is what we call the coefficient of interest if you do statistics. Assuming that you’ve got all of these things correctly identified and that you didn’t leave anything out—if we left out hours, that would be really bad because now this estimate would be picking up on that—if we’ve got this well specified, then we can look at how different jobs with different risk levels affect the wage.

If there’s a 1% increase in risk, we would be solving for how much wages have to increase to compensate workers for taking that extra risk on.
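On simulated data, the regression just described can be sketched in a few lines. Everything here is made up for illustration: the covariates, their coefficients, and the “true” VSL of $5 million baked into the fake wages.

```python
# Hedonic wage regression sketch: wage = b0 + b1*education + b2*hours
# + b3*risk + noise, where risk is the annual fatality probability.
# The coefficient on risk is the wage premium per unit of risk, i.e. the VSL.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
education = rng.uniform(10, 20, n)   # years of schooling
hours = rng.uniform(30, 60, n)       # weekly hours
risk = rng.uniform(0, 0.002, n)      # annual probability of death on the job

TRUE_VSL = 5_000_000                 # assumed for the simulation only
wage = (10_000 + 2_000 * education + 300 * hours
        + TRUE_VSL * risk + rng.normal(0, 2_000, n))

# OLS: stack a constant and the covariates, solve for the coefficients.
X = np.column_stack([np.ones(n), education, hours, risk])
beta, *_ = np.linalg.lstsq(X, wage, rcond=None)
print(f"estimated VSL ~ ${beta[3]:,.0f}")
```

Note how this mirrors the omitted-variable warning above: drop `hours` from `X` and the risk coefficient would start picking up whatever correlation risk has with hours.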

This relates to environmental economics because all sorts of different things that we do—such as cleaning up the air or cleaning up a toxic spill—oftentimes much of its value isn’t through ecosystem services like sediment retention, but simply through the fact that it keeps people from dying.

I’ll give you one example: the HERC, the Hennepin Energy Recovery Center. It burns garbage and is very unfortunately located right in the middle of the city, right next to a bunch of low-income residents. There’s a massive environmental justice aspect about this.

How would we use the tools of VSL to establish the costs and benefits? We know the benefits—it stores our garbage and burns it—but what are the costs?

This is a stylized example, but suppose we have 10,000 people who are exposed to this. The policy, in this case getting rid of the HERC, would reduce each person’s mortality risk by some amount, say 1 in 10,000. If we further knew from our hedonic analysis that people value that risk reduction at $200 each, you can then simply multiply two different things.

The number of residents multiplied by the risk reduction (I’ve sort of jury-rigged the numbers here so that it comes out nicely) tells us that this risk, with that level of exposure, leads to one statistical life saved by doing the policy.

The second thing we need to know is the value of that life. With a $200 willingness to pay from each of the 10,000 residents (presuming, and this is the hard part, that we did the analysis right and they all value it the same), the implied value of a statistical life is $2 million. Multiplying that by the one statistical life saved gives the mortality benefit of getting rid of the HERC (with numbers that are obviously not real), and that lets you do cost-benefit analysis.

If the replacement of the HERC costs more than the statistical life saved multiplied by the value of a statistical life, that’s useful information. It doesn’t say anything at all about the environmental justice component, and that’s where there’s a real caution. But it’s just a good example of how you can use this concept in some really meaningful debates.
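The stylized arithmetic above fits in a few lines. The residents, risk reduction, willingness to pay, and replacement cost are all illustrative assumptions, not real HERC data.

```python
# Stylized VSL cost-benefit for a mortality-reducing policy.

residents = 10_000
risk_reduction = 1 / 10_000   # drop in each resident's annual mortality risk
wtp_per_resident = 200        # what each resident would pay for that reduction

lives_saved = residents * risk_reduction   # expected statistical lives saved
vsl = wtp_per_resident / risk_reduction    # implied value of a statistical life
total_benefit = lives_saved * vsl          # equals residents * wtp_per_resident

# Decision rule: the policy passes this (partial) cost-benefit test if its
# cost is below the mortality benefit, ignoring everything else, including
# the environmental justice component.
replacement_cost = 1_500_000               # hypothetical
print(lives_saved, vsl, total_benefit)
print("worth it" if replacement_cost < total_benefit else "not worth it")
```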

What I’ll end with is that there’s tons of data. We’ll return to this at the beginning of next lecture. There’s really rich literature on what is the value of a statistical life. Just so that you don’t leave without this information, it ranges from a little bit less than a million dollars per life to higher-end values looking at $20 million. That’s how much your life is worth.

Have a good Monday, and we will see you next class. It looked like everybody was pretty successful running their country’s data, and troubleshooting that was part of the point today. Either way, feel free to reach out to me. Now’s a good time, and I will see you all later.

Transcript (Day 4)

Alright, well, let’s get started, everybody. Welcome to Day 4 of talking about valuation methods. We will today finish up VSL, or the Value of a Statistical Life, and then we are going to play a game. I’ll pass these out in a minute.

We are going to play for really bad Keurig coffee things. I’ll explain more in a minute.

Then we will discuss the policies that arise from different estimates of the value of a statistical life, as well as the other methods we’ve been talking about with ecosystem services, and more generally, the concept of non-market valuation.

So where we left off yesterday: I talked about the very large literature that exists on this. These are all published, peer-reviewed studies, as I mentioned, seeking to establish the value of a statistical life. The main thing to notice here is the method column, which I didn’t put enough emphasis on last class. Almost all of these come from the labor market, though a handful use contingent valuation.

Using the same terminology we talked about with ecosystem services, contingent valuation is essentially going to be some form of asking people. But if you ask people information about a topic as critical as literally life or death, it is especially hard to get accurate measures. That’s why most of the work, especially as you move to more modern stuff, has been heavily focused on analyzing the labor market, specifically how people accept risky jobs and whether they need to be paid more to accept those.

We walked through one example last class, but I want to give you a second one really quick, because this is going to be closer to the game that we’re going to play in a minute. This is also more typical of the sort of VSL estimation approaches out there.

It’s going to be focused on workers. Suppose there’s a worker who must be paid $5,000 more to accept a job that is risky. I’m going to record all this information down here so we can know how to use it. The willingness to pay—and I’m going to put in parentheses so we remember—for risk premium. Another word for premium is markup. This is what we want to establish: not the willingness to pay for taking the job, but the difference in how much somebody would have to accept to take a risky job versus an unrisky one.

Here, the required premium is $5,000 to take the riskier job. Suppose also that this job has a 1% probability of death, which means that out of 100 workers, on average one of them is going to die.

We’re going to be combining these two bits of information to get the statistical life valuation. If each of the workers who actually take on the risky job is only accepting this extra risk because of the $5,000 extra, and given the logic that for every 100 such workers, one will die, then the way you calculate the value of a statistical life is simply $5,000 times 100, which gives us $500,000 for a statistical life.
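That arithmetic can be checked in one line: the VSL is the risk premium divided by the extra probability of death, which is the same as the premium paid to each of the 100 workers, one of whom is expected to die.

```python
# VSL from a compensating wage differential: premium / extra death risk.

def vsl_from_premium(wage_premium, extra_death_prob):
    return wage_premium / extra_death_prob

# $5,000 premium for a 1% (0.01) chance of death
print(vsl_from_premium(5_000, 0.01))
```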

In my eyes, this is the better way of putting a dollar value on things that are risky rather than asking people. This is because, if you do it for real, it’s based on observed information in the marketplace. There actually are people working jobs, and you can indeed extract from the different prices paid that people do have a price premium they put on their life to take on these risky jobs.

But I want to drive this home with a little bit more specificity. First off, I’m going to pass out abalone. These will be something that will help us play a game about abalone. How do you say that? Does anybody know? Abalone is a delicacy. Has anybody ever eaten abalone?

I hear it’s very, very tasty. It’s a delicacy, and for us here today, it is going to be an example of a job that is very risky. Here’s a picture of them. They’re shells that are underwater, and you pry them open, they give a very tasty delicacy.

The challenge is they can be quite dangerous to harvest. In this particular experiment, I get to be the captain of this ship, and you are all the labor market. I am the one who already has the boat and everything. The thing I’m lacking is divers. To get abalone, you dive underneath water, and oftentimes they are in very murky water, so you’re literally feeling around with your hands to find them.

I’m not going to take the risk because I own the boat, right? I’m rich. Instead, I’m going to be paying people to take on the risk of diving into the water, feeling around with your hands in the rocks, and trying to find the abalone. In particular, I am wanting to hire 4 people. That’s my goal, and that’s how much I think I can bring to market and properly sell.

First off, this is going to be our ocean. I was trying to find things in my office that feel the same but look different, and I once bought myself, by accident, flavored Keurig. Have you ever had flavored Keurig? They’re awful. More importantly, they made my office smell awful, and everybody was commenting on it. So I’m not going to eat these or drink these. They will be our fish, and in particular, we have good fish and bad fish.

The good fish—anything that is vanilla, caramel, or mocha—are considered abalone. We like them, and they’re very tasty. Maple pecan, the worst possible coffee flavor of all in my opinion, is going to be our bad type of fish. Our goal is to harvest these abalone.

The way this classroom experiment is going to work is using the sheet that I have, which gives a little bit more context. On the back, you’ll see a set of numbers. You will essentially write your name on it and choose a wage that you would be bidding in order to become one of my divers.

We’re going to play it with 3 different rounds. You’re going to choose a wage somewhere between $1 and $5, which is about how much you’ll get paid to be the diver.

The way the game works is I need to get 12 abalone altogether. So we’re going to have 4 people who will do the job, and each one of those has to get 3 abalone to be able to get their payment. You only get the payment if you actually succeed at doing the job, which is getting the 3 abalone.

Assuming you do, I will then give you the wage. This time, it’ll be paid out in class points, right? So that means you can skip some homeworks, or go back and have homework where you didn’t get 100% or it was late, and you can boost it back up to full value. If you, for instance, bid $3 and you successfully complete your job, you will get 3 class points.

This is why I make class points: because I don’t want to be constantly bringing in money for everybody. The previous professor actually did bring in money, but he only paid small, tiny sums, and I don’t think that motivates people to act in a rational way.

As a stress reducer, if you’re afraid of fish, you could just bid $5. The market will clear somewhere below there. If you don’t want to participate, you can just circle the $5 indicator.

Before we get to that, I want to talk about something about the bad flavors. Suppose this fishery—this is totally made up—also has electric eels. These maple pecan Keurigs are going to be the eels. If you unfortunately draw one of those, you will be instantly killed by electrocution. This is a very dangerous job, and you obviously won’t get paid if you get killed, so you won’t get your class points. You’ll get to continue in the class—it’s not like you’re going to flunk out or something.

But there is going to be a different population of fish versus eels in three different areas that we are going to fish in. In Area 1, it’s going to be all good abalone. You have a 100% chance for each one you fish of getting a good one.

Then we’re going to start to go into risky waters. In Area 2, 10% of the creatures in our little ocean here are going to be the eels. Remember, you have to draw 3 times. On any single draw, 90% of the time you’ll get one of the good ones, and 10% of the time you get electrocuted. So your odds of surviving all 3 draws are 0.9 times 0.9 times 0.9, about 73%, and your chance of death is 1 minus that: roughly a 27% chance of being electrocuted in Area 2.

In Area 3, 30% of the creatures that you are going to be drawing from are those eels. Your chance of death would be 1 minus (0.7 times 0.7 times 0.7), which gives you about a 66% chance of dying. Pretty bad job. I’m really stacking the deck against you all here. But you might gain a lot.
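The odds for all three areas can be checked in a couple of lines. This sketch treats the 3 draws as independent with the stated eel share; the in-class draws are without replacement, so these percentages are the lecture’s approximation.

```python
# Chance of death over several independent draws with a given eel share.

def death_prob(eel_share, draws=3):
    # Survive only by drawing a good one every time
    return 1 - (1 - eel_share) ** draws

for area, share in [(1, 0.0), (2, 0.1), (3, 0.3)]:
    print(f"Area {area}: {death_prob(share):.1%} chance of electrocution")
```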

The thing I’d remind you from microeconomic theory—which you learned back in 1101—is that this can be easily represented as a labor market. On the vertical axis, this would be the wage that we need to pay: $5, $4, $3, $2, $1. On the horizontal, we’re going to have number of workers.

In principle, we know one thing for sure: I want 4 workers. That’s not going to change. I’m a single firm. We’re going to have a labor demand curve that looks like this.

In the labor market, the firm is actually the demander, and the individuals offering up their labor are the suppliers, who will trace out some relationship here. I’ll put it in dotted lines because this is actually what we’re going to figure out. This is the labor supply.

As a profit-maximizing fishery owner—I guess a profit-maximizing captain—I know that to get this, I’m going to have to choose a wage such that I can actually attract 4 workers. If I put a wage too low, I will get only one worker. If I go too high, I won’t maximize my profit. I’m going to try to set the wage where supply equals demand. That’s very basic micro that we all know.

So, there will be these 3 different areas with different probabilities of selecting eels. Let’s go fishing. The way it’s going to work is I’m going to say “circle your number,” and then I’ll circulate around the room and record the different values that people have. Based on that, I will then set what wage I’m going to offer. If you are one of the people who bid according to that number, you’ll be selected, and then you have to come up and draw.

One thing I left out is, if you bid below the offer, so if you say you bid $2 and I say I’m willing to pay $3, you’re one of these workers who’s really happy, right? Because you’ll get the $3 price. We’re assuming a competitive market where I can’t offer individual prices to individual people. If you bid less than that, you’re saying you do want to be one of the fisher people. Class points will change hands here, but only if you survive.
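With one posted wage and a demand for exactly 4 divers, the captain’s problem boils down to finding the lowest wage that attracts 4 bids. A sketch with made-up bids:

```python
# Market-clearing wage with a fixed demand for `workers_needed` divers
# and a single posted wage (no individual price discrimination).

def clearing_wage(bids, workers_needed=4):
    asks = sorted(bids)                 # lowest willingness-to-accept first
    return asks[workers_needed - 1]     # cheapest wage that fills the crew

bids = [2, 5, 3, 4, 2, 3, 5, 1]         # hypothetical student bids
wage = clearing_wage(bids)
hired = [b for b in bids if b <= wage]  # everyone at or below the wage works
print(wage, hired)
```

Everyone who bid below the posted wage earns surplus, just like the happy $2 bidder paid $3; if more people tie at the margin than are needed, the extras are chosen randomly, as in class.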

First off, let’s do Area 1. We’re going to have all 9 of them being good ones, meaning they’re vanilla, mocha, or caramel. Now, I would like you to write down what wage you would be willing to accept. Remember, if you get it and you draw your fish appropriately, you actually get those class points.

Any questions? Okay, so now let’s solicit some information for Round 1.

Is everybody done? Made your choice?

Who bid $5? Okay, good. Who bid $4? Who bid $3? Who bid $2? Who bid $1?

Alright, you’re definitely selected. We have 5 people who would be willing to work at $2, and I’m thinking of a number between 1 and 100, shout out.

76… 80… 25. It was actually 50. So you’ll be that, and you were the next closest. Actually, weren’t there more people? Okay, you were the only ones that spoke up, so you all get it. You four will be my fisher folk. Please come up and select your fish. Just 3, right? Draw 3.

Great. You got 3 vanillas? Throw them back in. I guess this is kind of dumb, right? Because there’s no chance of anybody dying here, and so we gotta do the labor anyway. Okay, so you got caramel—they’re a slightly different color abalone. We’re just building skills. Yep. Just so that the labor actually happens—we got 3. Okay.

You all got 2 class points. I’ll write that down after class.

But now it’s going to get a little dicey. There are 9 in here, and I chose that because now when I add this one maple pecan, there is a 10% chance of drawing the electric eel. Maybe it changes how much you’re willing to bid.

Okay, so I’m going to ask you now. We’re going to do Round 2. How much should you be paid in order to become an abalone diver here? So go ahead and write that down.

Okay. Who needs to be paid $1? Nobody. Okay. Who needs to be paid $2? You have a high risk preference. Who needs to be paid $3? Who needs $4? Who needs $5?

Okay, so you’re definitely in. Now, of the people who bid $3, I’m going to randomly choose those closest to this side of the classroom. You will be our three additional fishers. Please come up. It’s randomized. Okay, come on up and draw your fish. No looking.

Oh, no. Oh, no. You died. I’m sorry. It’s okay. I tried to inform you. You have to leave the class now. No, I’m just kidding. It’s true.

Can I grab it? You’re good? Okay. You earned 3 class points. You have 3 class points, nice. This one feels good. Okay, doing it dramatically one at a time. This one. It’s soft. That feels like another one. You did? You’re good. Okay, so all 3 surviving people get 3 class points.

But now things are going to get much riskier. I’m going to take out 2 of the abalone and replace them with eels. Now there’s going to be 3 eels and 7 abalone. It’s going to be much riskier. For this one, we’re going to do the exact same thing. Think to yourself: what amount would you need to be paid to do this? Keep in mind that if you die, you don’t get anything.

Everybody think about that and write down a number.

Alright, Round 3: who wrote down $1? Nobody? Good, we don’t have strange preferences. Anybody write down $2? Anybody write down $3? Beautiful, right? Anybody write down $4?

So what do we see? People need to be paid a little bit more. If you paid $3, you are definitely going to be able to come up and fish. If you bid $4, whoever is closest to that side of the classroom, raise your hands again if you bid $4. You—okay, come on up.

Now you’ve got a 67% chance of electrocuting yourself here. Go ahead. But the reward is bigger too: the price is now 4 class points, because I had to pay more. Don’t hold it too low. If you just promise not to look, I’ll look away.

Alright, survived. Survived. This is the scary one. Oh no! Electrocuted! You get nothing. Alright, that’s beautiful.

Oh no, electrocuted! One eel, that’s all it takes. Wait, who’s my fourth person? You also died. So who else bid $2? What am I missing? Did somebody change their bid? Who bid $3? You bid $3. Okay, so you also come up and fish. Good. Three total. You good? Alright, you’re good. So we had two people survive.

And that is now the end of our simulation. I’m going to take this information and plot it out. We’re going to actually get our supply curve, but let me show you some results from previous years.

In this particular case, we have a supply curve charted out simply by counting how many units of labor were offered at each price. At a price of $1, when we were in the safe fishing grounds, we had one unit there, and then we use the rest of the information to trace out the curve.

What we see in this particular one is two things. Number one, we have an upward-sloping supply curve, so it satisfies the law of supply. Number two, the bidders didn’t require as much compensation for the safe fishing grounds as they did for the higher-risk fishing grounds.
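The construction of that curve can be sketched in a few lines of Python. The bids here are hypothetical stand-ins, not the actual class data:

```python
# Build a labor supply schedule from willingness-to-accept bids.
# The bids are hypothetical, for illustration only.
bids = [2, 2, 2, 2, 2, 3, 3, 4, 5]  # each worker's minimum acceptable wage ($)

# Quantity of labor supplied at a wage = workers whose bid is at or below it.
supply = {wage: sum(b <= wage for b in bids) for wage in sorted(set(bids))}
print(supply)  # {2: 5, 3: 7, 4: 8, 5: 9}
```

The cumulative count rises with the wage, which is exactly the upward slope that the law of supply predicts.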

Let me pop out a PowerPoint for a second. The other professor and I have been keeping fastidious notes on this, not quite enough to publish a journal article, but we have data from 2025, 2024, 2023, 2022, 2021, and so forth. The classes are different sizes, but we get the same result every time: even with these relatively low stakes, people need more compensation to accept a higher level of risk.

What we will do today is talk about how you can take data like this—whether it’s generated here in class or whether it’s coming from analyzing labor statistics reported by the government—and how you would go from this to an actual price of the value of a statistical life.

Let me use the 2025 results, where we generated a value of a statistical life. Fishing Ground 1 had a market-clearing wage of $2, and we’re going to compare it to the riskiest: in Ground 3, the market-clearing wage was $3.22.

How do we go from this to the value of a statistical life?

Step one: calculate the increment. In this case, $3.22 minus the risk-free wage of $2 equals $1.22. Step two: combine the wage information with the actual risk metric. In Ground 3 you had a 66% chance of dying, based on the distribution of eels, so you multiply the premium by the inverse of that risk.

The inverse, 1 divided by 0.66, is approximately 1.52. It says that at this risk rate, out of every 3 people who fish, you expect 2 of them to die, or roughly one death per 1.5 workers. Multiplying the two numbers together, $1.22 times 1.52, gives approximately $1.85, our class-scale value of a statistical life.
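The two steps can be sketched directly, using the 2025 wage and risk figures quoted above:

```python
# Value of a statistical life (VSL) from compensating wage differentials,
# using the 2025 abalone-game numbers quoted in lecture.
safe_wage = 2.00    # market-clearing wage in the safe fishing grounds ($)
risky_wage = 3.22   # market-clearing wage in the riskiest grounds ($)
death_prob = 0.66   # probability of dying in the risky grounds

wage_premium = risky_wage - safe_wage   # extra pay demanded to face the risk
workers_per_death = 1 / death_prob      # workers per one expected death
vsl = wage_premium * workers_per_death  # class-scale value of a statistical life
print(round(vsl, 2))                    # 1.85
```

Real studies do the same arithmetic on labor-market data, just with much smaller risk increments and real wages.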

This is just working through in more detail exactly what we talked about at the end of last class: hedonics. What it does—whether you’re looking at houses and the value we place on access to fishing grounds or a beautiful view overlooking a park, or if you’re actually talking about the risk from doing a potentially life-threatening activity—hedonics is a very powerful way to elicit these prices. I think that’s kind of cool and has been used extensively throughout the literature.

I did put in the results from all the other years; they’re on the following slides.

To summarize, there are two key numbers. The first is the risk metric: 1 over the probability of death. This can also be expressed as the number of workers for which we’d expect one death; that’s what the 1.5 is.

The second is the wage premium: in the abalone game, the difference between the market-clearing wages in Grounds 1 and 3. Multiply the two together and you get the value of a statistical life. The wage premium alone is a per-worker value; scaling it up by the number of workers per expected death converts it into the value of one whole statistical life.

Any questions on that and the value of a statistical life? I want to talk about downsides.

Some people think this is immoral, but it’s actually used extensively by our government in many policy areas. The policy that illustrates this best is air quality.

Air quality: there are many air pollutants, and the EPA focuses on six key “criteria” pollutants. One of the most important in that subset is PM2.5, particulate matter smaller than 2.5 microns in diameter. It comes from all sorts of sources, but prominently from the emissions you get from burning coal. There are other pollutants out there too, but PM2.5 is of particular importance because it has some of the worst health impacts.

Essentially, you get particulate matter in your lungs, and this causes premature mortality. What we have here is one of the seminal articles that goes through the epidemiological evidence and then multiplies it by the value of a statistical life to say how this shakes out when you are thinking about it in terms of mortality times that price.

First, they did a literature review identifying the evidence on deaths per 100,000 at different concentrations of PM2.5. The way you can read this is: a standard of 13 micrograms per cubic meter versus 12 versus 11. “Avoided mortality” asks how many people would be saved from dying if you reduced pollution to that level, compared to a baseline level of pollution higher than 13.

In each of these cases, they found similar results. The adult mortality studies showed that if you reduce PM2.5 to 13 micrograms, you save 140 lives. Reduce it further to 12, and that goes up to 460; reduce it even further, and it goes to 1,500. That is the basic relationship on the benefits side, but think back to our abatement-cost lecture: it is probably more expensive to reach those lower pollution levels, and this is where the cost-benefit thinking comes in.

Other studies found similar results looking at infant mortality, though much smaller numbers. What’s kind of nice about this one is we also have a lot more data on other things besides just avoided mortality. They also looked at non-fatal heart attacks, hospital admissions for various things, emergency room visits, and lost work days, among other things.

Has anybody ever had an asthma response to pollution? It’s kind of scary, isn’t it? I was traveling in Africa, where garbage is burned, all at the same time right after work lets out. I just started hyperventilating; I couldn’t get enough air, and it was really scary. I was fine in the end, and you obviously made it here to class too.

These are other outcomes we might also want to put a value on. I certainly didn’t enjoy that experience. We’ll talk about how in a minute, but the basic idea is that you take the value of a statistical life, multiply it by the avoided mortality, and get a monetized value for each of these different concentration standards.
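That multiplication can be sketched as follows. The avoided-death counts are the ones from the study discussed above; the $10 million VSL is an assumed round number for illustration, not a figure from the lecture:

```python
# Monetized benefit of a PM2.5 standard = avoided deaths x value of a statistical life.
vsl = 10_000_000  # assumed VSL in dollars, a round number for illustration only

# Avoided adult deaths at each candidate standard (ug/m^3), from the study above.
avoided_deaths = {13: 140, 12: 460, 11: 1500}

benefits = {std: deaths * vsl for std, deaths in avoided_deaths.items()}
print(benefits[13])  # 1400000000, i.e. $1.4 billion at the 13 ug/m^3 standard
```

A full cost-benefit analysis would set these figures against the rising abatement costs of reaching each tighter standard.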

On a more lighthearted note, after all this talk of death, I also found a fun paper putting a value on a statistical dog life. Using contingent valuation, where you essentially just ask people how much they would pay, they found that the value of a statistical dog life is $10,000.

As a check to see if you’re following along: why couldn’t they use hedonic analysis to get this? It’s kind of a silly question, but what’s going on here?

Dogs don’t work. They don’t make a choice about whether to enter the labor market, so we can’t observe a wage-risk trade-off for dogs: they don’t accept money for their labor.

Okay, so summing it all up—although I’ve been extolling the virtues of hedonic valuation and thinking about the value of a statistical life, and that’s basically because it’s based on observed wage-risk trade-offs—it’s still worth thinking about all the cons. I think the biggest one here is the assumption of perfect information, right? We’ve talked about this one a bunch of times when criticizing the assumptions of the free market, but do workers really have accurate information regarding the risks that they face?

There’s all sorts of evidence out there that, especially when risks are relatively small, we systematically underestimate them. So this assumption, at least based on the behavioral evidence, is very likely not to hold.

Other problems? There’s a real fairness issue here: workers who bear the risk may not be representative of the broader population. What if you have a bunch of risk seekers who really love the risky life? Think about workers on an offshore oil rig; there’s almost a machismo culture about enjoying that sort of lifestyle. Another example would be cowboys, the stereotype of people living dangerous lives for macho reasons.

Relatedly, this all assumes that willingness to take risks is the same in that risky group as in the general population. If it’s not, and risk-lovers select into these jobs and demand smaller wage premiums, then our estimate will understate the value a typical person places on their life.

Another problem is that there are externalities not being considered. If somebody dies, they bear the cost of not being there anymore, but this also has huge externalities on society: their loved ones that miss them, the fact that they’re no longer working in that job, etc.

But probably the hardest one—and this is where the most debate on the topic is—is what about age? Should the death of a 90-year-old somehow count differently than the death of an 18-year-old? I want to see a show of hands. Who thinks that the value of a statistical life of an 18-year-old is worth more than that of a 90-year-old?

Anybody think it should be the exact same?

What’s your argument for the same? You can’t make that generalization about a person. You don’t know for sure that the 18-year-old’s going to really live longer.

Does it feel wrong too? Is that a little bit a part of it?

Yeah. When you ask people this question in statistical studies, it’s actually about half and half. You all are a little bit more in favor of putting a lower value on older people, but it’s incredibly politically contentious.

There were actually some policies implemented by governments that started to introduce this concept, and the amount of pushback they received was huge. People argued that you can’t put a different dollar value on people’s lives. All people are created equal. It was very politically contentious.

In terms of the economics, though, one thing I will point to is that quality-adjusted life years, or QALYs, have been seen, at least among economists, as a good way of thinking about this question. Essentially, there are two variables to consider as somebody approaches death: number one, when do they die? And number two, what is the quality of their life along the way?

With this, the benefit is the area under the curve, right? It’s sort of like consumer surplus, but a bit more morbid. Someone who died early might have had perfect health for a while and then something really bad happened. They were exposed to mercury or something like that. They didn’t die right away, but their life was significantly reduced in quality and got worse until they eventually died.

Versus some other case where somebody has a different trajectory: they both live longer, and throughout most of their lifespan, they enjoy a better quality of life.

Quality-adjusted life years are a way of saying: take the value of a statistical life that we got from our earlier methods and weight it by the quality of life experienced over those years. What’s nice about this is that if you have a policy that increases safety and thereby reduces the likelihood of death, it will show up as an increase in quality-adjusted life years, simply because you’re shifting from an early death to a later one.

Alternatively, if you have an intervention like cleaning air for indoor air quality purposes, that will still get registered here. Even if it doesn’t have an impact on death per se, if it increases the quality throughout those years that the person’s living, then it will get a portion of the value of the total statistical life.

So, okay, and with that, we are done with the valuation components, which is a big part of this course. Any questions?

I have a question: does anybody want this coffee? It’s free. You can have whatever flavor you want. This is free for the grabbing. I truly hate this coffee.

We’re a little bit early. We’ll call it early today and have a good rest of your day. Feel free to grab any coffee.

What would you choose? I think I would have gone with $2, $3, $4 is what I would have done. That’s exactly what that is. Oh, okay.

Also, we could take mercury as an example. Yeah, a long time ago, mad hatters, the people who made hats, suffered from mercury poisoning. Mercury was used for blocking the hats, I believe. That’s correct.

It was in the hazards material I sent out: chronic mercury exposure. It was really bad.

You want to know the most morbid example I know of? Chimney sweeps. In Victorian England, homeless kids were employed as chimney sweeps because they were small enough to fit down the chimney. Their expected lifespan was 7 years. Yeah, think about cleaning 10 chimneys a day, so coated in coal soot that you couldn’t breathe. You could hold your breath for a really long time, though.

So, with that, I’ll see you next class.