Lecture 06 - Inclusive Wealth

Reading: Polasky et al. 2015

Slides as PowerPoint: Download here

Video link: On YouTube

Content

The Midterm Decision

The class began with an important administrative decision regarding the course structure. The instructor raised the question of whether to include a midterm examination in the course. In previous iterations of the class, a midterm focused on mathematics and theory had been standard practice. However, this current version of the course has been redesigned with a different pedagogical approach, emphasizing the development of practical tools and skills necessary for conducting research in environmental and resource economics rather than pure theoretical knowledge.

The instructor explained that when attempting to write the midterm, it became apparent that creating traditional problems, such as solving the Nordhaus DICE model mathematically, would not align with the course’s current emphasis. While such theoretical exercises have their place in economics education, particularly in courses like APEC 8601 which maintains a strong focus on theory, this course prioritizes practical application and hands-on learning experiences. After conducting an informal vote among students physically present in the classroom, the decision was made to eliminate the midterm examination.

This decision necessitates a restructuring of the course grading scheme. The problem sets will now carry increased weight in the final grade calculation, reflecting their importance as the primary assessment tool for the course. The instructor noted that the first two assignments were primarily setup exercises and did not reflect the level of effort and engagement that will be required for subsequent assignments. The research-related products that students produce throughout the course will also receive increased grading importance, aligning with the course’s focus on developing practical research capabilities.

Introduction to the Earth Economy DevStack

The Earth Economy DevStack represents a sophisticated computational playground designed for linking various economic and environmental models. This platform emerged from the Natural Capital (NATCAP) teams within the department and serves as a central hub for integrated modeling approaches. The DevStack brings together multiple modeling tools that have traditionally operated in isolation, enabling researchers to explore complex interactions between economic systems and environmental processes.

The platform incorporates several key components that students have been introduced to throughout the course. InVEST, which focuses on ecosystem services modeling, provides detailed spatial analysis of how natural systems provide benefits to human populations. The GTAP (Global Trade Analysis Project) model offers a computable general equilibrium framework for analyzing economic impacts across sectors and regions. Additional tools within the DevStack facilitate the integration of these systems. SEALS (Spatial Economic Allocation Landscape Simulator) provides land use change modeling capabilities, while GTAPI automates various aspects of CGE analysis.

The true power of the DevStack lies not in these individual components but in their integration. The platform enables researchers to link GTAP and InVEST models, creating feedback loops between economic decisions and ecosystem service provision that would be impossible to capture with either model in isolation. This integration capability will be leveraged throughout the remainder of the course as students learn to conduct sophisticated integrated assessments.

Visual Studio Code Configuration and Setup

Visual Studio Code represents a paradigm shift from traditional, language-specific development environments. Unlike RStudio, which focuses exclusively on R programming, VS Code functions as a universal integrated development environment capable of supporting hundreds of programming languages through its extension system. This flexibility makes it an ideal tool for the interdisciplinary work required in modern environmental economics research, where projects often involve multiple programming languages and diverse analytical approaches.

The instructor demonstrated the process of launching VS Code and explained its fundamental nature as an IDE. An integrated development environment combines a sophisticated text editor with extensive functionality that brings together various development tools. This integration provides researchers with a unified workspace for writing code, managing version control, debugging programs, and visualizing results. The comparison with RStudio helped contextualize VS Code’s capabilities for students already familiar with that environment. While RStudio combines the ability to run R commands with visualization tools like variable explorers and script editors, VS Code extends these capabilities across all programming languages.

The debate within the department about whether to transition the R course from RStudio to Visual Studio Code reflects broader trends in the computational research community. The movement toward VS Code is driven by its superior power and flexibility compared to single-language IDEs. While RStudio excels in its focused support for R programming, its limitation to a single language and lack of robust GitHub integration increasingly constrains researchers working on complex, multi-language projects. The decision to maintain RStudio for the introductory R course while introducing VS Code in this advanced course represents a pragmatic compromise between familiarity and capability.

The instructor’s personal workflow exemplifies the advantages of adopting VS Code as a primary development environment. Despite conducting substantial R programming work, the instructor has not needed to install RStudio on their current machine for two years, demonstrating that VS Code can fully replace language-specific IDEs for experienced users. This transition illustrates the tool’s maturity and comprehensiveness in supporting diverse programming workflows.

GitHub Integration and Repository Management

The integration between VS Code and GitHub represents a crucial component of modern collaborative research workflows. The process begins with connecting VS Code to a GitHub account through the authentication system. Students were instructed to locate the GitHub icon in the lower left corner of the VS Code window and initiate the sign-in process. This authentication creates a persistent connection between the local development environment and the remote repository hosting service, enabling seamless code synchronization across multiple devices and collaborators.

The authentication process involves a browser-based OAuth flow that securely establishes the connection without requiring users to enter credentials directly into VS Code. Once authenticated, VS Code gains the ability to push and pull code directly, eliminating the need for separate command-line operations for most version control tasks. This integration extends beyond simple file synchronization to include features like settings sync, which allows researchers to maintain consistent development environments across different machines, including desktop computers, laptops, and even browser-based instances of VS Code.

The instructor emphasized the importance of proper file structure organization when working with Git repositories. Students were instructed to create a standardized folder structure within their user directories, specifically an APEC8602 folder that would serve as the parent directory for all course-related repositories. This standardization prevents path-related errors in future code execution and ensures consistency across all students’ development environments. The actual course repository, named “APEC 8602 2025,” would be cloned as a subfolder within this parent directory, maintaining a clear hierarchical organization.

The cloning process itself was demonstrated through VS Code’s command palette, accessed via Ctrl-Shift-P on Windows or Command-Shift-P on Mac. This universal command interface provides access to all of VS Code’s functionality through a searchable list, eliminating the need to memorize menu locations or keyboard shortcuts for less frequently used commands. When searching for “git clone” in the command palette, students could choose between entering a repository URL directly or browsing their GitHub account for available repositories. The ability to search for repositories by name, such as “jandrewjohnson/APEC 8602-2025,” streamlines the process for users managing multiple projects.

Troubleshooting and Security Considerations

The class encountered several technical challenges that provided valuable learning opportunities about development environment management. One significant issue involved VS Code’s restricted mode, a security feature that limits code execution capabilities when opening untrusted workspaces. This security measure, while important for protecting against malicious code, can create confusion for new users who may not understand why their code fails to execute properly. The resolution requires explicitly trusting the workspace, a decision that carries security implications students need to understand.

The instructor’s anecdote about a six-month dispute with the university’s IT department over administrator rights on a high-performance computing system illustrates the ongoing tension between security requirements and research needs. The situation involved a supercomputer with 128 cores and a terabyte of memory that IT wanted to lock down, preventing the owner from having administrator access. This restriction would have severely limited the machine’s utility for research purposes. The resolution, which involved treating the machine as an isolated system with only network access, demonstrates the need for researchers to advocate for appropriate access to computational resources while acknowledging legitimate security concerns.

A common point of confusion arose from the distinction between the search box and the command palette in VS Code. While both appear similar, clicking on the search box at the top of the VS Code window initiates a file search rather than opening the command palette. The subtle difference between these interfaces caused initial confusion for several students. The instructor clarified that typing a greater-than sign (>) in the search box converts it to the command palette, providing an alternative route to this crucial interface. Understanding these interface nuances is essential for efficient VS Code usage.

The discussion of trust settings when opening repositories highlighted important security considerations in collaborative development. When VS Code asks whether to trust the authors of files in a repository, users must make a security decision. While the risk of malicious code in academic repositories is generally low, the potential exists for code that could damage or delete files if executed with full permissions. The instructor acknowledged never refusing to trust a repository but emphasized the importance of following good internet hygiene practices, avoiding suspicious sources, and maintaining awareness of phishing attempts and other security threats.

Julia Development Environment - Extension Installation and Language Support

The process of configuring VS Code for Julia development begins with installing the appropriate language extension. VS Code’s extension marketplace provides access to hundreds of language-specific and tool-specific extensions that transform the base editor into a specialized development environment. The official Julia extension, maintained by the Julia Language organization, provides syntax highlighting, code completion, debugging capabilities, and integrated REPL functionality.

The installation process involves navigating to the extensions icon in VS Code’s left sidebar and searching for “Julia” in the marketplace. The official extension should be selected, distinguished by its publisher being listed as “Julia Language.” Once installed, the extension immediately transforms the appearance of Julia files, with syntax highlighting making code structure visible through color coding. Keywords like “using” appear in purple, while functions and methods display in yellow, providing visual cues that aid in code comprehension and error detection.

The necessity of installing language-specific extensions reflects VS Code’s philosophy of modularity. Rather than shipping with built-in support for every programming language, VS Code maintains a lean core and allows users to add only the functionality they need. This approach keeps the application responsive while providing flexibility to support hundreds of programming languages and frameworks. The same principle applies to Python, R, and other languages used in the course, each requiring its own extension for full functionality.

Workspace Configuration and File Management

The distinction between opening individual files and opening folders as workspaces represents a fundamental concept in VS Code usage. When users double-click on a file like examplejuliafile.jl, VS Code opens in single-file mode, providing editing capabilities but lacking the broader context of the project structure. This mode limits access to features like project-wide search, integrated terminal functionality, and Git integration that depend on understanding the project’s directory structure.

The preferred approach involves adding the entire project folder as a workspace. This can be accomplished through the File Explorer icon in VS Code’s sidebar, where right-clicking in empty space reveals the “Add Folder to Workspace” option. Alternatively, the File menu provides access to the same functionality. Once the cloned repository folder is added as a workspace, VS Code gains full awareness of the project structure, enabling features like IntelliSense code completion across files, project-wide refactoring capabilities, and integrated version control.

VS Code’s support for multiple workspaces within a single window represents a significant advantage over traditional IDEs like RStudio. Researchers often work on multiple related projects simultaneously, and the ability to have several project folders open in one VS Code instance facilitates cross-project comparisons, code reuse, and integrated workflows. Each workspace maintains its own settings, extensions configurations, and debugging profiles while sharing the common VS Code interface.

Running Julia Code and Understanding Compilation

The execution of Julia code within VS Code demonstrates the power of integrated development environments. Rather than requiring users to switch between an editor and a separate terminal window, VS Code provides integrated execution capabilities through its built-in terminal and the Julia extension’s REPL integration. When students opened examplejuliafile.jl and clicked the run button, VS Code automatically launched a Julia instance in the integrated terminal and executed the code.

The initial execution of Julia code involves a compilation phase that can cause confusion for new users. Julia’s just-in-time compilation approach means that functions are compiled to machine code the first time they are called, resulting in longer initial execution times followed by much faster subsequent runs. This compilation process, which converts human-readable Julia code to optimized machine instructions, explains why the first run of the DICE model took considerably longer than expected. The lack of progress indicators during this compilation phase represents a design oversight that can leave users uncertain whether their code is executing properly or has encountered an error.
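To make the compilation effect concrete, the following is a minimal, generic Julia snippet (not from the course repository) that exposes the first-run cost:

```julia
# Generic demonstration of Julia's just-in-time compilation:
# the first call to a function includes compile time; later calls do not.
f(x) = sum(sqrt.(x))

x = rand(10^6)
@time f(x)  # first call: includes JIT compilation, noticeably slower
@time f(x)  # second call: reuses compiled machine code, much faster
```

The same effect explains the silent pause before the DICE model's first run.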

The successful execution of the DICE model optimization produced a result showing 4,485 trillion units of utility, demonstrating the model’s ability to solve complex optimization problems. This computational capability, combined with Julia’s performance advantages over interpreted languages, makes it an ideal choice for the numerically intensive calculations required in integrated assessment modeling. Students who encountered errors related to missing packages like Mimi were directed to review previous materials on package installation, highlighting the importance of proper environment setup before attempting to run complex models.
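For reference, here is a minimal sketch of the run-and-inspect workflow, assuming the MimiDICE2016 package; the exact package and function names should be verified against the Mimi documentation.

```julia
# Hedged sketch: build, run, and inspect DICE through Mimi.
# Assumes the MimiDICE2016 package; names follow its documented conventions.
using Mimi
using MimiDICE2016

m = MimiDICE2016.get_model()  # construct the default DICE-2016 model
run(m)                        # solve the model over all time steps
explore(m)                    # open Mimi's interactive results explorer
```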

Climate Economics and the Three Great Debates

The Climate Sensitivity Parameter

The relationship between changes in atmospheric carbon concentration and resulting temperature changes represents the first major debate in climate economics. This relationship, quantified through the climate sensitivity parameter, forms the foundation of all climate impact assessments. While the increase in atmospheric CO2 concentrations due to human industrial activity is well-documented and rarely disputed even by climate skeptics, the precise magnitude of temperature response to these concentration changes remains a subject of ongoing scientific investigation.

The climate sensitivity parameter essentially functions as a multiplier that translates changes in carbon concentration into temperature changes. The scientific consensus, as synthesized by Sherwood et al. in their comprehensive review, suggests a value slightly under 3 degrees Celsius for a doubling of atmospheric CO2 concentrations. This estimate comes with uncertainty bounds represented by a probability density function, reflecting the inherent complexity of the climate system and limitations in both observational data and model capabilities.
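In reduced form, the relationship is logarithmic in concentrations; a standard way to write it (a textbook simplification, not any particular model's internal equations) is:

```latex
\Delta T \;=\; S \,\log_2\!\left(\frac{C}{C_0}\right)
```

Moving from the preindustrial 280 ppm to 560 ppm makes the log term equal one, so the warming equals S itself, a little under 3 degrees Celsius.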

The course will provide students with hands-on experience in estimating climate sensitivity using the MAGICC (Model for the Assessment of Greenhouse Gas Induced Climate Change) model. MAGICC represents a reduced-complexity climate model that can run efficiently on standard laptops while maintaining scientific credibility. The model is widely used in integrated assessment modeling because it provides a consensus approach to climate simulation while remaining computationally tractable. The model takes emissions from economic models as inputs, calculates atmospheric concentrations, determines radiative forcing, and ultimately produces temperature change projections.

The MAGICC model’s workflow illustrates the causal chain from human activities to climate impacts. Emissions data feeds into calculations of atmospheric concentrations, accounting for carbon cycle dynamics and the residence time of different greenhouse gases. These concentrations determine radiative forcing, the change in energy balance at the top of the atmosphere. Finally, the climate system’s response to this forcing, mediated by feedbacks involving clouds, ice sheets, and ocean circulation, produces temperature changes. This entire complex process can be summarized in the single climate sensitivity parameter that economists use in integrated assessment models.
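Two standard approximations summarize the middle links of this chain (again a simplification, not MAGICC's internal equations): radiative forcing is roughly logarithmic in concentration, and equilibrium warming is proportional to forcing.

```latex
F \;\approx\; 5.35 \,\ln\!\left(\frac{C}{C_0}\right)\ \text{W m}^{-2},
\qquad
\Delta T_{\text{eq}} \;=\; \lambda\, F
```

A doubling of CO2 gives a forcing of roughly 3.7 W per square meter, so the climate sensitivity S is about 3.7 times the feedback parameter lambda.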

The Damage Function Debate

The second major debate concerns the relationship between temperature changes and economic damages, a relationship encoded in the damage function of integrated assessment models. This relationship proves even more challenging to establish than climate sensitivity because it involves complex interactions between physical climate changes and human economic systems. The pathways through which climate change affects economic output are numerous and varied, ranging from direct impacts like reduced agricultural productivity to indirect effects through health impacts, infrastructure damage, and forced migration.

Some damage pathways are relatively straightforward to quantify. Higher temperatures increase demand for air conditioning, representing a direct economic cost as resources are diverted from other productive uses to maintain comfort. Agricultural impacts can be estimated through crop yield models that relate temperature and precipitation to productivity. Labor productivity declines in extreme heat can be measured through observational studies showing reduced output in outdoor industries during heat waves. The human respiratory system’s reduced efficiency at high temperatures provides a physiological basis for expecting health-related economic impacts.

However, the exact functional form of the damage function remains highly contested. Different models use varying parameterizations, from simple quadratic functions to complex regional and sectoral specifications. Some models assume smooth, continuous damage functions, while others incorporate tipping points and discontinuities representing potential climate catastrophes. The choice of damage function specification can dramatically affect model outcomes and policy recommendations, making this an area of active research and debate.
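As one concrete example of these competing parameterizations, DICE-style models use a smooth quadratic damage fraction; the coefficient shown is approximately the DICE-2016 value and is given for illustration only.

```latex
\frac{D_t}{Y_t} \;=\; \psi_1 T_t \;+\; \psi_2 T_t^{2},
\qquad \psi_1 = 0,\quad \psi_2 \approx 0.00236
```

Under this specification, 2 degrees of warming costs roughly 0.9 percent of output and 3 degrees roughly 2.1 percent; tipping-point specifications replace the smooth curve with discontinuous jumps.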

The instructor noted that much of the work conducted in Minnesota using earth economy modeling adds crucial detail to damage function specification. By explicitly modeling ecosystem services and their contribution to economic welfare, these approaches capture impacts that traditional damage functions miss. The loss of pollination services, degradation of water purification capacity, and reduction in storm protection from coastal ecosystems all represent economic damages that require sophisticated modeling approaches to quantify accurately.

The Discount Rate Controversy

The third and perhaps most contentious debate in climate economics concerns the appropriate discount rate for evaluating future climate damages. This parameter determines how we weigh future benefits against present costs, a crucial consideration given that climate change mitigation requires immediate expenditures to prevent damages that will largely occur decades or centuries in the future. The choice of discount rate can change the optimal policy recommendation from aggressive immediate action to gradual future response.

The discount rate controversy has profound implications for estimating the social cost of carbon, the key metric linking climate science to economic policy. The social cost of carbon represents the economic damage caused by emitting one additional ton of CO2, expressed in present value terms. This metric requires integrating climate sensitivity, damage functions, and discounting over time horizons spanning centuries. The Biden administration’s choice of $51 per ton of CO2 has dramatic implications for cost-benefit analysis of federal projects, as this value must be counted as a cost for every ton of emissions produced.

Recent academic literature reveals the sensitivity of social cost of carbon estimates to discount rate assumptions. Rennert et al.’s Nature publication provides estimates ranging from $80 per ton with a 3% discount rate to $308 per ton with a 1.5% rate. The mathematical implications of very low discount rates create philosophical challenges. With a zero discount rate, implying equal weight for all future generations, the optimal amount of current consumption approaches zero as the infinite future dominates present considerations. This mathematical result, while logically consistent, conflicts with both observed behavior and ethical intuitions about intergenerational justice.
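The mechanics behind this sensitivity are visible in the textbook present-value definition of the social cost of carbon, in which a one-ton emissions pulse at time t generates a discounted stream of marginal damages:

```latex
SCC_t \;=\; \sum_{s = t}^{\infty} \frac{1}{(1+r)^{\,s-t}}\,
\frac{\partial D_s}{\partial E_t}
```

Damages a century ahead are weighted by 1/(1+r)^100, about 0.05 at a 3 percent rate but 0.22 at 1.5 percent; as r approaches zero the weights approach one and the infinite-horizon sum can diverge, which is the mathematical root of the zero-discounting paradox described above.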

Weitzman’s “Dismal Theorem” explores these philosophical and mathematical challenges in depth, showing how fat-tailed uncertainty about potentially catastrophic climate damages can push the expected value of mitigation toward infinity. These theoretical insights highlight the need for careful consideration of ethical assumptions underlying economic analysis of climate policy. The assignment will require students to explore these parameter choices directly, modifying the DICE model to understand how different assumptions about climate sensitivity, damage functions, and discount rates affect optimal climate policy.

Inclusive Wealth and Sustainable Development

Limitations of GDP-Focused Approaches

The transition to discussing inclusive wealth begins with a critique of GDP-centric approaches to sustainability assessment. The DICE model, despite its sophistication and influence, essentially optimizes GDP by determining optimal spending on mitigation to avoid future GDP losses. This focus on GDP as the primary metric of economic success ignores extensive literature demonstrating that GDP poorly captures human welfare. Even within standard economic theory, welfare is properly defined as equivalent variation, encompassing consumer and producer surplus, which differs substantially from gross domestic product.

The emphasis on GDP in integrated assessment models like DICE creates several problematic implications for sustainability analysis. GDP measures flows of economic activity rather than stocks of wealth or capital that generate future welfare. A country could maintain high GDP while depleting its natural capital, human capital, or produced capital stocks, pursuing an unsustainable development path that current metrics would not reveal. The aggregation inherent in GDP also obscures distributional concerns, treating a dollar of income to the wealthy as equivalent to a dollar for the poor.

The quotation from Theodore Roosevelt provides historical context for thinking beyond GDP: “The nation behaves well if it treats its natural resources as assets, which it must turn over to the next generation increased, and not impaired in value.” This perspective shifts focus from current production flows to the asset base that enables future production and welfare. This asset-based view of sustainability aligns with economic theory on intertemporal optimization and provides a more comprehensive framework for evaluating development paths.

Inclusive Wealth as a Comprehensive Metric

Inclusive wealth offers a theoretically grounded alternative to GDP for assessing sustainable development. The concept is defined as the aggregate value of all capital assets in an economy, measured at their shadow prices reflecting marginal contributions to current and future well-being. This comprehensive accounting includes produced capital (infrastructure, machinery, buildings), human capital (education, health, skills), natural capital (forests, minerals, ecosystem services), and potentially social capital (institutions, trust, networks).

The mathematical framework for inclusive wealth builds on standard production theory but expands the types of capital considered. The sustainability criterion becomes whether the total derivative of the value function, calculated across the vector of all capital assets, remains positive over time. This criterion ensures that the economy’s productive base, properly valued at shadow prices, is maintained or enhanced rather than depleted. The approach provides a rigorous foundation for sustainability assessment that aligns with economic optimization principles.
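A compact statement of this criterion, in notation consistent with the standard inclusive-wealth literature (the symbols here are added for exposition):

```latex
W_t \;=\; \sum_i p_{i,t}\, K_{i,t},
\qquad p_{i,t} \;\equiv\; \frac{\partial V_t}{\partial K_{i,t}},
\qquad \text{sustainability:}\quad
\frac{dW_t}{dt} \;=\; \sum_i p_{i,t}\,\frac{dK_{i,t}}{dt} \;\ge\; 0
```

Here V is the intertemporal welfare (value) function, the K_i are the stocks of produced, human, and natural capital, and the p_i are their shadow prices; holding shadow prices fixed, wealth is non-declining exactly when the shadow-valued sum of net investments is non-negative.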

The advantages of inclusive wealth over GDP for sustainability assessment are numerous. By focusing on stocks rather than flows, inclusive wealth captures the depletion of natural resources that GDP ignores. The inclusion of multiple types of capital reveals trade-offs between different development strategies, such as whether converting forests to farmland increases or decreases total wealth when ecosystem services are properly valued. The forward-looking nature of asset valuation incorporates future impacts of current decisions, unlike backward-looking GDP measurements.

Implications for Environmental Policy

The inclusive wealth framework has profound implications for environmental and climate policy design. Rather than viewing environmental protection as a constraint on economic growth measured by GDP, the inclusive wealth approach recognizes natural capital as a productive asset contributing to long-term welfare. Policies that maintain or enhance natural capital stocks while developing human and produced capital represent genuine sustainable development, even if they reduce short-term GDP growth.

The application to climate policy is particularly relevant. Climate change threatens to depreciate multiple forms of capital simultaneously: natural capital through ecosystem degradation, produced capital through infrastructure damage, and human capital through health impacts and forced migration. Mitigation investments that prevent these capital losses contribute to inclusive wealth even if they reduce current consumption. This reframing provides a more complete picture of climate policy trade-offs than narrow cost-benefit analyses focused on GDP impacts.

The instructor’s preference for inclusive wealth as the optimization target for sustainability reflects its strong theoretical foundations and practical applicability. Unlike ad hoc sustainability indicators or dashboard approaches that lack aggregation methods, inclusive wealth provides a unified metric grounded in economic theory. The challenge lies in empirical implementation, particularly in valuing non-market capital stocks like ecosystem services and social capital. The earth economy modeling framework addresses these challenges by explicitly modeling and valuing ecosystem services within an inclusive wealth accounting system.

Conclusion and Next Steps

The lecture successfully integrated three major themes: practical development environment setup, theoretical understanding of climate economics debates, and the conceptual framework of inclusive wealth for sustainability assessment. Students now have functioning Julia development environments connected to GitHub repositories, enabling them to modify and experiment with integrated assessment models. The discussion of climate sensitivity, damage functions, and discount rates provides the theoretical background necessary to understand and critique existing climate economic models.

The introduction to inclusive wealth sets the stage for more sophisticated analyses that move beyond GDP-focused assessments. In the upcoming assignment, students will apply their technical skills and theoretical knowledge by modifying the DICE model parameters and analyzing how different assumptions affect optimal climate policy. This hands-on approach reinforces the course’s emphasis on developing practical research capabilities while maintaining rigorous theoretical foundations.

The next lecture will begin with a deeper exploration of inclusive wealth theory before moving to additional topics. Students should complete Assignment 3 once it becomes available, using their newly configured development environments to explore the parameter space of integrated assessment models. The combination of technical skills, theoretical understanding, and practical application positions students to contribute meaningfully to the evolving field of environmental and resource economics.

Transcript

First, let’s go over the agenda for today. I want to start with some logistical points about the midterm. I’ll return to that in a moment, but I’ll address it now since I’ve already mentioned it. The question is: should we have a midterm? In previous years, I included a midterm focused on mathematics and theory, but this iteration of the class is designed to give you the tools and skills to do research in this area. That approach isn’t as conducive to a traditional midterm.

Honestly, when I sat down to write the midterm, I realized I could make up some problems—like solving the Nordhaus DICE model—but that’s not the emphasis of this course. You’ll get plenty of that in other courses, especially in 8601, which is much more focused on theory. Here, we’re focused on application and learning how to do things yourself.

So, I want to take a vote. Only people in class can vote—sorry, online folks. Who really wants to have a midterm? If you raise your hand, you might make some enemies, but I’m kidding. I think we’re not going to have a midterm. I’ll send out an updated syllabus with new grade weighting. The problem sets will be the main time commitment, and you’ll see that once the next one goes live. The first two were just setup assignments and didn’t reflect the effort required for the rest.

Problem sets will increase in grading importance, as will the research-related products we produce. Is everyone okay with this? No objections to fewer tests? Good.

Just to reiterate, we decided not to have a midterm. Are you all okay with that? Great.

For the rest of the agenda: I want to do a quick return to VS Code. We started installing the dev stack, but we haven’t really opened VS Code yet, and I want to show you how useful it is. That’s one of the two things I decided were necessary before releasing Assignment 3—a little hands-on time with VS Code and the Earth Economy DevStack.

Then I want to return briefly to the topic of climate economics and discuss the three major debates present there, since this will be necessary for Assignments 2 and 3.

That should take about half the class, and then we’ll switch gears to inclusive wealth and the reading for today. We’ll talk about what inclusive wealth is and discuss how to measure it.

Let’s get started.

First, what is the Earth Economy DevStack? I’ll stop sharing my PowerPoint screen so we can go live for this part. I need to reshare in Zoom—let me know if you can’t see anything, but I think I’ve got it set up.

The Earth Economy DevStack is a playground for linking models, developed out of the NATCAP teams here in our department. To show you, here’s our course website, but if you go to my actual website, you’ll see I’ve kept them separate and given them different themes to avoid confusion. You’ll notice a lot more content there.

One of the sections is software, and in that section, you’ll find a link for the Earth Economy DevStack. And this is what the dev stack is. You’ve probably already been introduced to many of these components. We’ve discussed InVEST, which focuses on ecosystem services, and GTAP, which is a computable general equilibrium (CGE) model. There are additional tools necessary to integrate these systems. For example, we have a land use change model called SEALS, and GTAPI, which automates CGE analysis. We won’t cover these in detail for a while, but essentially, the dev stack is a software platform that connects various tools, enabling the linkage of GTAP and InVEST, as well as other models. We’ll be leveraging this integration throughout the course.

At the end of the last class, I asked you to make sure you had cloned the dev stack using Git. I emphasized the importance of the command line—it’s essential, and I use it daily. However, most of my work is actually done in VS Code, which is another way to interact with Git. That’s what we’ll focus on today. Last class, we used the git clone command to copy the dev stack into your files directory. Now, we’re going to do more, specifically cloning the course directory, but this time using VS Code.

First, launch VS Code if you haven’t already. VS Code is an integrated development environment (IDE), essentially a text editor with extensive functionality that brings together many tools. Who here has used VS Code before this class? Who has used RStudio? RStudio is also an IDE, combining the ability to run R commands with visualization tools like variable lists and script editors. VS Code is an IDE as well, but it works for all programming languages.

We’ve debated internally whether to switch the R course from RStudio to Visual Studio Code. The community is moving in that direction, and VS Code is a more powerful tool than RStudio. While I love RStudio, it’s focused solely on R and doesn’t support other languages or robust GitHub integration. We decided to keep RStudio for the R course since most people are familiar with it, but this course will introduce you to more advanced tools.

Personally, I do a lot of R work, but I haven’t installed RStudio on my current machine in two years. If you like VS Code, it’s a great replacement for RStudio.

VS Code can do more than just programming. For example, you could use it to write an entire dissertation. Who here knows LaTeX? Most of you do. My admission as an economist is that I don’t know LaTeX. I’ve never run Overleaf or similar tools, but I’ve created many PDFs rendered by LaTeX using VS Code and Quarto Markdown. I prefer this approach and have never needed to use Overleaf. So, while I don’t know LaTeX directly, I can still produce LaTeX documents.

VS Code is highly customizable, and you can add extensions to enhance its functionality. We’ll frequently use the command palette, which is a key way to interact with GitHub. If you press CTRL-Shift-P on Windows (or Command-Shift-P on Mac), it brings up the command palette. This is a useful way to access many commands in VS Code. For example, if you want to switch between light and dark themes, you can use the command palette to toggle them.

Who prefers light theme? Who prefers dark theme? It seems dark theme is more common, but both are available.

We’ll use the command palette as our main interface with GitHub. You can do everything with GitHub using the command line, but you may find the command palette more convenient. To connect VS Code to your GitHub account, look for the GitHub icon in the lower left of the VS Code window. Click it, and you’ll be prompted to sign in and synchronize your settings across devices. This allows you to have the same setup on different machines, your phone, or in the browser.

Once you’ve signed in, a browser window will open for authentication. Select GitHub, follow the prompts, and your VS Code will be linked to GitHub. You can now push and pull code directly within VS Code.

Does everyone have VS Code linked to their GitHub account? If you’re unsure, you can check by opening the command palette (Ctrl-Shift-P), searching for “git clone,” and seeing if you have the option to clone from GitHub. If you do, you’re connected.

Now, let’s talk about file structure. In your users directory, create a folder called APEC8602. If you’ve already named it differently, please rename it to avoid typos in future code. Inside your files directory, you should have APEC8602, which should be empty for now. The GitHub repository is not the outermost APEC8602 folder; instead, you’ll have a subfolder named APEC 8602 2025, which is what we’ll clone.

To clone the repository, remember the location where you want to save it. In VS Code, open the command palette (Ctrl-Shift-P), search for “clone,” and select “Git Clone.” You can copy and paste the course repository URL, but I’ll show you the interface method. Since we’ve synced our account to GitHub, you can choose “clone from GitHub” and search for the repository. For example, you might see “jandrewjohnson/APEC 8602-2025.” You can search for it or enter the URL directly.

Once you select the repository, VS Code will ask where you’d like to save it. Make sure to choose the correct location—inside your APEC8602 folder. When prompted, confirm the location and proceed with the clone.

Let me pause here. Has everyone reached the point where VS Code is asking where to put the repository? Please nod or indicate so we can troubleshoot if needed.

If you’re not connected to GitHub in VS Code, let’s troubleshoot. One way is to open the command palette and search for “GitHub sign in.” Follow the prompts to complete the connection. We also worked through a live issue here: one machine was stuck because the workspace hadn’t been trusted, and that trust prompt normally only triggers when you download something new. Walking through the trust dialog together was a good opportunity for everyone. Okay, we got it.

I swear computer permissions issues are the thing I hate the most. A lot of bugs come up simply because it was in restricted mode in this case. At some point, I should tell you the story of the six-month fight I had with OIT here. I bought myself a supercomputer with 128 cores and a terabyte of memory, and they wanted to have it locked down and not give me administrator rights. That’s like giving someone the Sistine Chapel but telling them they can’t go in it. It took me six months to convince them to let me wipe the machine. I had to threaten to do it myself if they wouldn’t do it for me. They were worried about security, and my answer was: assume I’m a security risk, treat me like a knowledgeless student who just needs Wi-Fi. That’s all I needed—I just wanted my machine back. Anyway, that’s my rant. I understand the purpose of security settings, but they are a pain.

Now we’ve all logged in, so let’s go back to the actual clone. You did git clone. Hopefully, you haven’t yet set where you want the repository to go, because I want to talk through that file structure. However you prefer to navigate, go to your users directory, then files, and select 8602—it should be empty if you haven’t added anything—and then select that as the destination for the repository. Don’t select anything else; make sure it’s 8602 that you select.

When you do that, you will have a new directory downloaded into your 8602, and you should have APEC 8602 2025, which is the actual name of the repository. In there, you should see a bunch of files that were downloaded, like examplejuliafile.jl. Did everybody get to that point and have it organized in this way?

Yes. Whenever I do this, I always see the huge list of trusted authors. Is there ever a time when I shouldn’t press “yes”? It’s really unlikely. In principle, someone could have code on your machine that would delete all your files if you just ran that file, and that would be bad. I guess that’s a risk we take. If you trust me, yes—you’ll have a hard time completing this course if you don’t. I don’t have a better answer. All I will say is I’ve never not trusted something, because I follow basic good practices. I haven’t gotten a virus in decades because I don’t do risky things. All the instincts you have for not going to suspicious sites or falling for phishing scams—just be a good internet user and you’ll be fine.

The second question: let’s say you update your GitHub. Should you re-clone again? We’ll talk about that. It’s called git pull and git push, but we’ll come to that in a second. Cloning just creates a brand new copy if there’s nothing there. What happens if you clone a repository, then someone like me makes a change? This is exactly what Git is designed to handle well. The workflow is: when I log on to any project on Git, I have my VS Code configured to automatically do a git pull. Many people like to do that manually to keep control, but when you do a git pull, either it finds no changes and says everything is up to date, or it finds new files. Then it checks for conflicts. If there’s no conflict, it downloads and adds them to your repository. The challenging point is if you edit a file and I also edit a file—it then goes into a merge situation, and Git has robust tools for dealing with that. We’ll cover that in more detail in a subsequent lecture.

Any other questions?

Last question: do I clone to my own repo or to yours? Cloning refers to getting the files onto your computer. It doesn’t necessarily mean putting it on the online GitHub hosting. So you are not actually doing either; you are just cloning directly from my website to your computer, but it hasn’t saved it as a repository on your user account either. There are all sorts of ways you can do it. Another common workflow is what we did for the Earth Economy dev stack—I showed you how to fork it. A fork refers to when you take someone else’s online repo and make a copy of it on your personal GitHub account. When you do a clone, you could clone your own. If you do fork, then when you clone, you clone your fork of the original. Here, we took a shortcut—we just cloned directly from mine, and it’s not yet on your user account. But you could do it in reverse order: clone it onto your local machine, then publish it, and that will create a new repository on your user account.

This way is possible.

So, what we’re going to do is—I should have been more clear—I actually want you to clone it again, not from the fork you created. Apologies for that. Let’s talk this through carefully. On GitHub, here’s my repository—my version. If you go to this one, it’s public, so you can access it. I thought I only had you fork the dev stack, but regardless, we’re going to create a direct clone of this one, not of your user account. In particular, copy this URL. When you fork it, it creates a new version with your username. Just so we all have the right files today, I want you to delete the one you just cloned if it was from a fork, or even if it wasn’t, let’s do it again for practice. This time, point directly to this URL. For that, copy it.

In VS Code, after you select clone, instead of using the interface—which probably populated with your fork—just directly provide the repository URL: the github.com address under my username, jandrewjohnson, not yours.

Then, when you do that, save it in the same location. To verify you’ve done it correctly, go in and make sure you see all these files.

In the command palette, first search for “clone.” Once you have “clone,” it asks for the repository URL. You might need to delete a folder. You might also be in restricted mode. Let me take a look.

How’s it going over here? You got it all? Good, thank you.

Let me see what you have here. It looks like an old version of it. What we need to do is delete the repository you currently have cloned, and re-clone the one from my URL.

The Explorer view might look broken, but that’s just the user interface catching up rather than the underlying files; you can get it working again.

I’m going to make sure everybody’s on the same page here. Let me make a point on that for the recording.

We’ve diagnosed an interesting problem: instead of hitting the shortcut Ctrl-Shift-P to open up the command palette, some people were clicking a tempting box that looks like a search window. When you click on that, it appears similar to the command palette, but it’s actually just the search window. Incidentally, you can type a greater-than sign (>), and that converts it to the command palette. That would be the same as hitting Ctrl-Shift-P, which pops up the command palette and pre-populates it with the > character. But if you just click on the search box, it won’t default to the command palette.

I use both frequently. I rarely use my mouse to navigate to a different file; I just hit CTRL-P to bring up the file search. So, to switch to a different file, I’ll usually use CTRL-P. Both are useful, but it can be confusing. I’m glad we diagnosed it and got it sorted.

This is how the class is supposed to go—a lot of me walking around, making sure the software is working. In my experience, this is the fastest way to get people up to speed with a powerful set of tools.

Now we have VS Code set up, we’ve cloned the course repository, and you have a number of things I want to quickly talk through.

As an absolute fail-safe, there is also a “Download ZIP” option in the green GitHub code button. Many people initially want to interact with GitHub this way, since it feels like being in charge of your own downloads. The problem is that it breaks your linkage to GitHub, so you won’t get updates and will always have an out-of-date version. We’re not going to use that.

Let’s skip that and go straight to configuring VS Code for Julia. In VS Code, you’ll find an extensions icon on the left toolbar. Click that and search for “Julia.” VS Code has tons of useful extensions, and there’s an officially supported Julia one. Some people wonder why VS Code doesn’t already have Julia built in, but you have to get the extensions because it works for hundreds of programming languages. The same applies if you want to use Python—you need the Python extension.

Once you add the Julia extension, examplejuliafile.jl will suddenly be colorful. This means VS Code now understands the Julia language and can identify typos or mistakes. For example, “using” is purple because it’s a Julia command, and “get_model” is yellow because it’s a method or function.

A few notes that might be helpful: Julia is the first one that comes up, and “Julia Language” is the official provider. One thing I’ve seen is confusion about how to get the right file open. There are two ways: you can open a specific file, or you can open a folder, which is called a workspace. If you just double-click on examplejuliafile.jl, it opens in single-file mode. Instead, you should add the folder as a workspace.

To do that, go to the File Explorer icon. Once you’re in the Explorer, right-click in some open space and select “Add Folder to Workspace.” You could also go to File > Add Folder. Select the repo you just cloned. Many people already did this because VS Code often asks if you want to add the folder after cloning, but if you didn’t, this is how you do it manually.

Now you’ll have the folder added to your workspace. You can have multiple folders and workspaces, which is a powerful feature compared to RStudio, where you typically have only one workspace. Here, you can manage multiple projects. Once you have that, you’re in folder mode, not file mode. When you click on a file, you have the context of all the other files around it.

I’m not going to get into other optional extensions, but there are many. The two I use most are Quarto (for publishing the website) and GitGraph. In your course repository, there’s a folder called “src,” and inside you’ll see folders for assignments, lectures, and slides. If you go to “lectures,” you’ll see QMD files—Quarto Markdown. Many of you know Markdown or R Markdown; Quarto Markdown is similar but adds extra support. Unless you’re doing advanced work, it’s essentially identical.

This is the website in its rawest form. My belief is that if you can’t express something with 8 bits of information—letters—it’s not open source. If you can’t understand the source, it’s not open source. Here, you have the source to our website. The only thing you can’t do is push it to my URL, but you could host it yourself. This is the literal file used to render our website. For example, if you go to “Assignments,” you can see what I was working on for Assignment 3 before deciding we needed another day of Julia content. You can see exactly what I had written up to that point. That’s the downside of showing you the full open source—you can see my mistakes.

I also gave you a version of the rendered website. It has the same structure, but Quarto takes a QMD file and, when you run “quarto render” on the command line, creates HTML files. If you go to your file explorer and look at the rendered website, you’ll see it’s not coming from the internet but from your local repository. You literally have a copy of the whole website.

This is me showing off how cool Git is for open source development. The other extension I’ll mention briefly is GitGraph. I have many repositories, and it becomes powerful but confusing when many people edit at once. GitGraph shows you the tree of all the different contributors. For example, I’m the only contributor to the website, so it’s just a series of changes, but I could revert the website to any previous state by going back to an earlier commit. GitGraph is a useful way of visualizing this.

Now let’s talk about Julia code. The last thing we’ll do before getting into today’s lecture is confirm that everything is set up correctly. To do that, go to examplejuliafile.jl. This should look familiar—it’s exactly what we entered on the command prompt last class, but now it’s in a JL file. When we run this, we could use the command prompt by typing “julia examplejuliafile.jl,” but we’re in an IDE, so let’s use its power. Hit the run button; as long as this file is open, it will launch Julia in a terminal and run the code.

This code takes a little while to calculate because it creates the model and runs it. You won’t see anything until it finishes. Did anybody get an error message?

This is somewhat annoying code because there’s no indicator that it’s working, which is bad design. But you can see now mine has succeeded—we’ve solved the DICE model again, and we can see the total utility maximization. There are 4,485 trillion units of utility in this optimization. That’s a lot of utility. I was just stalling to make sure everyone’s computer caught up. Did everyone’s Julia file run successfully?

A couple of gotchas: if you’ve never run Julia before, it pre-compiles everything, which takes a long time. That was just converting the .jl files to zeros and ones—compiling—so the code runs much faster. That’s why Julia is so fast.

Some people got errors like “cannot run using Mimi” or “can’t find Mimi.” That means you haven’t installed Mimi yet. You can try now, but it might take a while. Refer back to my video lecture for the steps to add Mimi as a package.

Now we’re at a good point: if I give you an assignment to change the climate sensitivity parameter used by Nordhaus and rerun the model, this is the file you’d change. Maybe create a copy, but I’ll give you a hint—there will be some version of “m.update_parameter” or similar. You now have a scratch pad that runs, so you can make changes, run it again, and use the Explorer to see the results.
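In code, that hint looks something like the following sketch; update_param! is Mimi’s actual mutator, but the parameter name here is a guess you should verify against the MimiDICE source:

```julia
# Hypothetical version of the assignment workflow. The parameter name
# :t2xco2 (climate sensitivity) is an assumption -- confirm it in the
# MimiDICE source and the Mimi documentation.
using Mimi, MimiDICE2016

m = MimiDICE2016.get_model()
update_param!(m, :t2xco2, 4.5)  # assumed name; try a higher climate sensitivity
run(m)
explore(m)                      # inspect how temperature and utility paths changed
```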

After class, I’ll finish editing the assignment and push it out, but you’ll have plenty of time to do it.

Any questions?

Yes, you can do it however you want. That’s probably best. At the end of the day, the assignment will ask for a PDF outputting the results I ask for. So it doesn’t matter how you do it, as long as you get that PDF.

The sensitivity parameter would be changed in the DICE 2016 model. In the Mimi model, MimiDICE, we used “get_model,” which returns the m object. Don’t worry about that now; I’ll point you to the source file and the Mimi documentation, which has a good tutorial for that notation.

Any last questions before we switch to debates?

Let’s see what happens if I go max… Hmm, that’s always a problem. No big deal.

Oh, we had a bunch of chats I’ve been missing. Good, you found the slides. I forgot to push the new slides to the course repo, so hopefully you just went to the Google Drive. Good.

Shifting gears, let’s talk about the last thing in the climate IAM world that’s necessary to understand for this work. There are three big debates. I indicated a few, but let’s add more detail.

Number one: what is the relationship between a change in carbon and a change in temperature? There’s no doubt our industry produces carbon, and CO2 levels are rising. Even the hardest-core climate skeptic won’t deny that, because it’s easily validated.

The more uncertain question is the relationship between a change in atmospheric carbon concentration and actual temperature. This coefficient—the climate sensitivity—multiplied by the change in carbon gives the change in temperature.

The second debate is the relationship between change in temperature and change in economic damages. That’s even harder to establish. There are many ways it could be related: hot days mean more money spent on air conditioning, which is a damage because it’s money not spent on other productive things. Or it could be more direct, like it’s harder to work in a field when it’s hot. There’s empirical evidence that our respiratory system doesn’t work as well in heat. The exact shape of the damage function is debated, and many models use different parameterizations.

The final debate is how much we care about the future. Our choice of discount rate determines this, and it matters for climate change because most benefits of reducing emissions are felt by future generations, while costs are immediate. How you weigh future benefits against current costs is critical.

To summarize, Sherwood et al. provided a recent review of the climate sensitivity parameter, summarizing many estimates and creating a best fit. The takeaway is it’s a little under 3 degrees C for a doubling of CO2. The probability density function for that S is shown in their paper.

You might wonder where this comes from. In this class, we’ll learn our last tool—Python—in a subsequent lecture. Once we have Python, we’ll run a model and reconstruct this curve ourselves, calculating climate sensitivity directly. We’ll use the MAGICC model, a reduced-complexity climate model that can run on your laptop. It’s widely used in integrated assessment modeling because it reflects the consensus and is easy to run. It takes emissions from the economic model, calculates concentrations, radiative forcing, and finally the climate response (delta temperature). That result is then used in the damage functions for economic impact.

If you want to go in depth, one of my favorite figures from the MAGIC paper shows temperature history in 3D, with different trajectories under different SSPs. The bars illustrate mitigation—how effective we are at reducing emissions bends the curve down. It’s a complex but informative figure.

Ultimately, all this can be summarized into the single parameter we care about as economists, the climate sensitivity parameter. The results of the MAGICC model under many parameterizations help estimate that parameter's distribution.

That transitions into the damage function. Economic models sometimes run the climate model to directly calculate the change in climate damage from a change in emissions. Some just take scenarios, but either way, you go from a change in CO2 to a change in temperature, and then to a change in GDP or economic damages.

What are the impacts of climate change? When I was 10 and first learned about climate change, sea level rise sounded like science fiction. But it's real. The best proof is that 20,000 years ago, the coastlines looked completely different: much of what is now island Southeast Asia was connected to the Asian mainland, Australia was joined to New Guinea, and the ice sheets extended much farther. Humans likely made it to North America by walking across a land bridge. Sea level rise drastically changes things, and if you built a skyscraper or condo in those areas, that's economic damage.

We’ll dive deeper into the damage function in subsequent lectures. Much of the work we’ve done in Minnesota with earth economy modeling adds detail there, showing that ecosystem services also matter.

The discount rate is very controversial because it leads to very different estimates of damages, expressed as the social cost of carbon. This is a key term connecting climate economics with policy: it identifies how many additional dollars of damages to the economy come from one more ton of CO2 emissions. Calculating it requires modeling the whole chain from emissions to concentrations, to temperature, to damages.
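
Formally, the social cost of carbon for an emission in year t is the discounted stream of marginal damages it causes:

$$
\mathrm{SCC}_t = \sum_{s \ge t} \frac{1}{(1+r)^{\,s-t}} \, \frac{\partial D_s}{\partial E_t}
$$

where D_s is damages in year s, E_t is the emission pulse in year t, and r is the discount rate.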

The main policy contribution of the DICE model is estimating the social cost of carbon. In Julia, you can see this directly: run the model once along a given emissions path, then run it again with a small pulse of extra carbon, and compare. The discounted change in output, divided by the size of the pulse, is the marginal damage of the next ton. You can calculate this along the baseline (no mitigation), under a more realistic policy, or along the optimal mitigation path.
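
MimiDICE packages this pulse experiment in a helper function. A minimal sketch, assuming MimiDICE2016 and its `compute_scc`; the exact keyword arguments are an assumption, so check the package documentation:

```julia
using MimiDICE2016

# compute_scc runs the model with and without a marginal emissions
# pulse and returns the discounted difference in damages per ton.
# The keyword arguments shown here are an assumption; check the docs.
scc = MimiDICE2016.compute_scc(year = 2020, prtp = 0.03)
println("SCC in 2020: \$$(round(scc, digits = 2)) per ton of CO2")
```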

This is important because, for example, the Biden administration chose $51 per ton of CO2, which has dramatic implications for cost-benefit analysis. If every ton of CO2 emitted by burning coal carries a $51 cost, it quickly changes which projects are worth funding.
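
Back of the envelope, with entirely hypothetical plant numbers:

```julia
# Hypothetical example: a coal plant emitting 4 million tons of CO2
# per year, costed at the administration's $51/ton figure.
tons_per_year = 4_000_000
scc           = 51.0
println("Annual climate cost: \$$(tons_per_year * scc / 1e6) million")   # $204 million
```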

We’re in a chaotic era of climate policy, so I won’t dive into current politics, but even in the academic literature, there are differing views on the social cost of carbon. One study in Nature by Rennert et al. provides good estimates for different discount rates. With a 2% discount rate, they get $185 per ton. The social cost of carbon is heavily determined by the discount rate.

This illustration from Rennert et al. shows different colors for different near-term discount rates, all much lower than Nordhaus's rate. With a 3% discount rate, the central estimate is $80; with 1.5%, it's $308. With a 0% discount rate, it's much higher still. And if you cared about every future generation exactly as much as the present, the mathematically optimal amount to consume yourself today is essentially zero: any sacrifice now is weighed against an infinite sum of future utility, so the present gets vanishing weight. This is a mathematical aside; nobody really thinks about it that way, but you need at least a tiny positive discount rate to avoid strange results. Interesting articles like Weitzman's "Dismal Theorem" paper discuss these issues.
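
To spell out that aside: discounted utilitarian welfare with a pure rate of time preference rho is

$$
W = \sum_{t=0}^{\infty} \frac{u(c_t)}{(1+\rho)^t}
$$

which is finite for any rho > 0 but diverges as rho goes to zero when utility is bounded away from zero, so each individual period, including the present, carries vanishing relative weight.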

The assignment will have you play around with the DICE model and try different values for these parameters, then make conclusions based on the results. Any questions before we transition to inclusive wealth in the last few minutes?

This year, I spent much more time on climate IAMs, and I'm happy I did. When most people think of environmental economists, they think of climate IAMs. If you're not doing causal inference or similar empirical approaches, you're likely in the climate IAM space.

Transitioning to the article we read about inclusive wealth, I like this paper because it answers in a sophisticated and compelling way what we should optimize. DICE essentially optimizes GDP—maximizing GDP by spending on mitigation to avoid future losses. But what if we don’t care about GDP? There’s a huge literature showing GDP is not a great metric of welfare. Even as economists, we define welfare as equivalent variation—the area under the curves, consumer surplus plus producer surplus—which is not the same as GDP. In the DICE model, the focus on GDP is not ideal. To be fair, it does include welfare, but there’s undue emphasis on GDP.

I also like this quotation: “The nation behaves well if it treats its natural resources as assets, which it must turn over to the next generation increased, and not impaired in value.” This is a nuanced view of sustainability. We need to think not about GDP in a given year, but about the assets we have that we might use to generate future value.

The segue from climate models is that DICE is great for estimating the costs and benefits of mitigation, but it's limited: a single aggregate commodity, a single link between the economy and Earth's systems, and a simplified representation of nature. The biggest limitation is a single aggregate damage function expressing everything as a change in total output. In reality, there are multiple goods and multiple links between the economy and nature. Using DICE or similar models for ethics, decision-making, or policy support can therefore be problematic.

What does DICE say about sustainable development? When you run it, there’s a tendency to say that’s the optimal sustainable development. But as economists, we know it depends on assumptions—one region, one sector. Can we say anything useful about which countries or industries are winners or losers? No, because there’s only one region and one sector.

Nordhaus himself created the RICE model, which is DICE with multiple regions. It’s challenging to solve models with many regions, which is why you don’t see them scaled up to high-resolution data. I’m oversimplifying the DICE model, but I focus on it because it’s central in political debates. People who use DICE results showing a low social cost of carbon often ignore the RICE model.

The deeper question is: what is the best approach for sustainability? Economics can provide a good definition of sustainability. We’ll spend the first 10 minutes of next lecture on this, but to preview: using economic theory, inclusive wealth is a good metric. It’s my personal choice for what we should optimize for sustainability.

Inclusive wealth is defined as the aggregate value of all capital assets, measured in their current net present value, where the value of a unit of capital is measured by its contribution to current and future well-being, aggregated over all people.

Let me show you the last thing. What is inclusive wealth? This is what we'll focus on next lecture. We'll take our standard framing of production and include more types of capital, hence "inclusive" wealth. Using that framing, the change in inclusive wealth is the total derivative of the value function, which takes the vector of capital assets as its argument; sustainability requires that this change be greater than or equal to zero over time.
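
In symbols, following the standard inclusive-wealth accounting (a sketch of the notation, with V the intertemporal value function over the capital vector K and p_i the shadow prices):

$$
\frac{dW}{dt} = \sum_i p_i \frac{dK_i}{dt}, \qquad p_i = \frac{\partial V}{\partial K_i}
$$

and the sustainability criterion is dW/dt >= 0: the value of the whole capital portfolio, not output in any given year, must be non-declining.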

With that, I think we’re good for today. Any questions?

Keep an eye out for Assignment 3, which will go live as soon as I finish it—possibly on the plane, since I’m going to Chicago soon. No, I have to teach another class first. Anyways, bye!