Lecture 09 - Introduction to Computable General Equilibrium

Reading: Shoven and Whalley 1984

Slides as Powerpoint: Download here

Video link: On Youtube

Content

Introduction to Assignment 4 and AI in Economics

Today’s lecture focuses on Computable General Equilibrium modeling, but begins with important practical matters regarding the newest assignment and broader reflections on how artificial intelligence is transforming the economics profession. Assignment 4 has just been posted and centers on INVEST, but with a significant twist: students will be coding INVEST in Python, representing an entirely new programming language for many in the class. This raises an important question about how we learn programming languages so quickly in the modern era, which leads to a necessary discussion about both the opportunities and existential threats that AI presents to the economics profession.

The structure of today’s class allows time at the end for deeper questions about Assignment 3, while the main focus remains on introducing CGE modeling. The discussion will cover several key topics: the seminal 1984 work by Shoven and Whalley, earlier developments in the field, key equations including CES and CET functions, and the practical implementation of these concepts in GTAP.

Setting Up the Development Environment

Before diving into the theoretical content, proper workspace setup is essential. Students should load the Earth Economy DevStack code workspace, being careful not to click the VS Code application icon directly, as this can cause confusion. Instead, the correct approach is to launch VS Code by opening the eartheconomydevstack.code-workspace file, which should have been cloned into the user’s file directory. This method opens VS Code in a special mode with multiple folders loaded simultaneously.

The workspace should include multiple folders with Git and GitHub integration built into VS Code. At minimum, students need both the Earth Economy dev stack and the class repository loaded. If the class repository does not appear in the VS Code Explorer tab, students can right-click in that area and add the folder to the workspace. Saving the workspace at this point prevents the need to repeat this setup process each time, as it updates the code workspace file with the current configuration.

The New Era of AI-Assisted Programming

Assignment 4 presents an interesting pedagogical challenge: it requires knowledge of Python syntax, yet Python has not been formally covered in the course. This apparent gap reflects the transformative era we now inhabit with AI tools. The instructor’s experience teaching Undergraduate Introduction to Microeconomics 1101, which fulfills a writing requirement, provides a parallel example. The entire framework for teaching writing had to be reconsidered because students using AI with the exact prompts given to previous classes can now produce nearly flawless essays. This has forced a fundamental rethinking of what it means to teach writing in an era where a single prompt can generate a complete, polished essay in seconds.

The university community continues to debate these issues, engaging in significant institutional introspection. The current approach focuses more heavily on discussion and ensuring students can generate their own answers, rather than simply producing written essays. In a similar fashion, software AI tools can generate very good code extremely quickly. This capability is simultaneously powerful and dangerous, though not in the science fiction sense of a technological singularity. The real danger lies in the potential for students to use AI exclusively, finding themselves in situations where their code works but they have no understanding of why. This creates a tension similar to short-term versus long-term investing strategies.

In the companion course 8222, which many students are taking concurrently, the focus will be much more on understanding why software works the way it does, with extensive hands-on, traditional coding exercises. For the present course, however, the approach embraces what might be called the “dark arts” of programming.

Vibe Coding and Its Implications

The concept of “vibe coding” represents a new paradigm in software development. Vibe coding refers to using AI tools to generate code without fully understanding the underlying logic or implementation. As a demonstration, the instructor created an agent-based simulation in approximately three minutes using AI tools. This agent-based model serves as a replacement for a CGE, complete with sophisticated graphics and user controls. The instructor specified only the frameworks and libraries to be used, and Claude generated the rest of the code. While the instructor understands the resulting code, it is easy to see how someone could create impressive, functional software without comprehending the underlying implementation details. This represents the essence of vibe coding: it is extremely powerful, but the significant downside is that users might not understand why their code works, and if it stops working, they may lack the troubleshooting skills to fix it.

Despite these concerns, the course fully embraces this approach. Students can use AI as much as they want, with the syllabus explicitly allowing AI use with or without attribution, though students remain fully responsible for any mistakes in their work. Recent discourse highlights the concept of “comprehension debt,” described as the ticking time bomb of large language model-generated code. This parallels the well-known concept of “technical debt,” where taking shortcuts in code creates accumulated problems that must eventually be addressed. Comprehension debt refers specifically to the growing body of LLM-generated code that programmers do not fully understand, creating maintenance challenges that compound over time.

Research on AI’s Impact on the Programming Profession

A recent preprint from Harvard titled “Generative AI as Seniority-Biased Technical Change” examines how generative AI might bias technological change, particularly affecting hiring patterns in the software industry. The researchers found that after the release of ChatGPT (based on GPT-3.5), junior employment in firms adopting AI declined sharply relative to firms that did not adopt these tools, while senior employment continued to rise. This pattern suggests that the value of being an experienced programmer—someone who deeply understands why code works the way it does—is actually increasing in the AI era. Simultaneously, the value of low-level coding skills is decreasing, as AI can effectively replace many routine coding tasks but cannot replace high-level reasoning, system design, and deep technical understanding.

Practical Application: Using AI for Assignment 4

The instructor demonstrates how to approach Assignment 4 using AI tools in a practical, iterative manner. Much of the assignment involves running the INVEST model, which students have done before. Rather than dedicating an entire lecture to technical details like using GDAL to open a raster file or writing loops to call INVEST multiple times, the recommended approach involves working iteratively with AI tools.

When INVEST was run previously, the model could save its configuration as a Python file after solving. This file exists in the class repository, generated by INVEST version 3.13, though students’ files may look slightly different depending on their version. The recommended workflow for the assignment involves copying both the generated code and the assignment question, then pasting both into ChatGPT or another large language model to provide full context. Including both the question and the starter code is crucial because the model may not know the specific details of INVEST’s implementation otherwise.
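For orientation, the generated file follows a simple pattern: a model import, an args dictionary holding every input path and option, and a single execute call. The sketch below assumes the carbon model; the module path and the argument keys are illustrative guesses and will differ from whatever your own INVEST version wrote out.

```python
# Illustrative shape of an INVEST-generated Python script (carbon model assumed).
# The module path and the argument keys are examples only; use the ones that
# appear in the file your own INVEST run produced.
import natcap.invest.carbon

args = {
    'workspace_dir': 'carbon_workspace',            # where outputs are written
    'lulc_cur_path': 'lulc_current.tif',            # current land-use/land-cover raster
    'carbon_pools_path': 'biophysical_table.csv',   # per-class carbon pool values
    'calc_sequestration': False,
    'do_valuation': False,
}

if __name__ == '__main__':
    natcap.invest.carbon.execute(args)
```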

With this complete context, the LLM can generate code that modifies the arguments for different scenarios. For example, the assignment might require increasing carbon storage in residential land use classes by 50%, 100%, 150%, or 200%. The LLM can identify the relevant columns in the biophysical table, create a dictionary structure for the different scenarios, modify the arguments dictionary accordingly, and write code to rerun the model for each scenario. This represents good, professional programming practice, but it also means students receive the answer directly from the AI. The hope is that students will still develop intuition from the remainder of the assignment, which asks them to write analytical content about their results.
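A minimal sketch of that workflow is shown below, under several assumptions: it treats the carbon model as being driven by a carbon pools table with a land-use code column ('lucode') and per-pool carbon columns, and the residential code, column names, and file paths are placeholders to be replaced with the values from your own assignment files.

```python
# Sketch of the scenario workflow described above: scale carbon storage for the
# residential land-use class, write a new table per scenario, and rerun INVEST.
# Column names, the residential lucode, and all paths are assumptions; replace
# them with the values from your own biophysical table and generated script.
import os
import pandas as pd
import natcap.invest.carbon

BASE_TABLE = 'biophysical_table.csv'
RESIDENTIAL_LUCODE = 13                                       # assumed residential code
CARBON_COLUMNS = ['c_above', 'c_below', 'c_soil', 'c_dead']   # assumed column names
SCENARIOS = {'plus_50': 1.5, 'plus_100': 2.0, 'plus_150': 2.5, 'plus_200': 3.0}

# Baseline arguments copied from the INVEST-generated script (illustrative keys).
args = {
    'workspace_dir': 'carbon_workspace',
    'lulc_cur_path': 'lulc_current.tif',
    'carbon_pools_path': BASE_TABLE,
    'calc_sequestration': False,
    'do_valuation': False,
}

for name, multiplier in SCENARIOS.items():
    table = pd.read_csv(BASE_TABLE)
    table[CARBON_COLUMNS] = table[CARBON_COLUMNS].astype(float)

    # Scale carbon storage for the residential class only.
    is_residential = table['lucode'] == RESIDENTIAL_LUCODE
    table.loc[is_residential, CARBON_COLUMNS] *= multiplier

    scenario_table = f'biophysical_table_{name}.csv'
    table.to_csv(scenario_table, index=False)

    # Point the model at the scenario-specific table and a separate workspace.
    scenario_args = dict(args)
    scenario_args['carbon_pools_path'] = scenario_table
    scenario_args['workspace_dir'] = os.path.join('carbon_workspace', name)
    os.makedirs(scenario_args['workspace_dir'], exist_ok=True)

    natcap.invest.carbon.execute(scenario_args)
```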

The Broader Context of AI in Professional Programming

Professional software programmers are increasingly embracing these AI tools, and it has become important for students to understand both how to use them effectively and how to build their own intuition alongside tool usage. Some universities initially banned LLMs entirely, treating their use as plagiarism, but enforcement has proven extremely difficult. Moreover, the distinction between AI-generated and human-generated writing has become essentially impossible to detect reliably. Both writing and coding are undergoing fundamental transformations, and the most effective strategy involves understanding both sides of this equation: learning to use AI tools effectively while also working diligently to build personal understanding and expertise.

Transition to Computable General Equilibrium Models

Moving beyond the discussion of AI and programming, the lecture now turns to the main topic: computable general equilibrium models. Before diving into the technical details, there is a brief digression to acknowledge a remarkable video showing the MONIAC, a physical hydraulic model of the economy that uses actual pumps and reservoirs to represent economic flows. In this mechanical computer, flows of water represent economic transactions such as consumption and savings, while the height of water in each reservoir indicates the accumulation of capital or other economic stocks. This tangible representation illustrates essentially what modern CGE modeling accomplishes: building representations of economic systems. Fortunately, contemporary economists can use mathematics instead of plumbing to achieve this goal.

The course has covered earth economy modeling and the relationship between GTAP and INVEST, working toward linking these different modeling frameworks. Considerable time has been spent running INVEST, but comparatively little attention has been given to the CGE side of the equation, so the remainder of today’s lecture focuses on that component.

CGE Fundamentals: The Circular Flow Diagram

The discussion of the CGE component begins from first principles with a concept that every economics student has encountered: the circular flow diagram from introductory economics. While this is a foundational concept that students typically learn early in their economics education, courses often move quickly to equations and calculus, leaving the intuitive visual representation behind. The basic idea captured by the circular flow diagram is that economic agents—specifically households and firms—are fundamentally linked through markets. Prices emerge from interactions in these markets through the forces of supply and demand. Equilibrium is defined as the particular set of prices that clears all markets simultaneously, meaning supply equals demand in every market at those prices.
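Written compactly, in generic notation rather than that of any particular model, the equilibrium condition is simply:

```latex
% Equilibrium: a price vector p* at which demand equals supply in every market.
D_i(p^{*}) = S_i(p^{*}) \quad \text{for every market } i
```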

Beyond the financial flows captured in standard representations, there are also physical flows of goods and services moving through the economy. For simplicity, the focus here remains on financial flows, which is also the approach taken by the GTAP model. The simple circular flow diagram can be expanded and elaborated to represent the more complex GTAP CGE economy. This expanded version includes more boxes representing different economic actors, but it retains all the key conceptual elements: producers or firms, private households, and an additional construct called the regional household.

The Regional Household and Macroeconomic Structure

The regional household represents a modeling convenience rather than a literal economic actor. Its function is to allocate income among three competing uses: private expenditure, savings, and government spending. This construct is calibrated separately for each country in the model, reflecting real-world differences in savings rates and tax structures across nations. The regional household serves as a connection point with systems of national accounts, aligning with standard macroeconomic identities such as the familiar expenditure identity Y = C + I + G + (X − M), in which consumption, investment, government spending, and net exports sum to total economic output.

Producer Structure and Sectoral Detail

The main computational complexity relevant to policy analysis occurs on the producer side of the model. CGEs in general, and GTAP specifically, include numerous sectors, with the current version of GTAP featuring 65 distinct sectors. Each of these sectors has its own production function that describes how it combines inputs to create outputs. The production function for each sector combines two broad categories of inputs: endowments and intermediates. Endowments include factors such as capital, labor, and natural resources. Intermediates represent inputs purchased from other sectors in the economy. Most real-world firms use both types of inputs, and in many cases, intermediate inputs dominate the cost structure.

In CGE terminology, “value added” is effectively synonymous with endowments—it represents the contribution of primary factors to production. GTAP uses 65 sectors because this represents the level of detail that can be supported by available global data. Each sector’s production function combines value-added goods and intermediate inputs, with firms optimizing their input use based on relative prices. Inputs can be sourced domestically or imported from other countries, and the model carefully accounts for differences in prices, tariffs, and product differentiation. For example, the model recognizes that champagne produced in France differs from sparkling wine produced elsewhere, even though both might fall under similar product categories.

GTAP models multiple countries simultaneously—currently 160 countries or regions—with each country having its own complete circular flow of income and expenditure. These national economies are linked through international trade flows, creating a truly global model.

Income Allocation and Production Optimization

The regional household in each country collects all income generated within that economy and allocates it among private households, savings, and government expenditure. This allocation typically uses a Cobb-Douglas functional form, which provides flexibility while maintaining tractability. This structure enables policy analysis examining questions such as the economic effects of changing national savings rates or altering the balance between private and public spending.
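The practical payoff of the Cobb-Douglas assumption is that each of the three uses receives a fixed share of regional income regardless of prices. The toy sketch below illustrates only that property; the share values are invented for illustration and are not GTAP parameters, and GTAP’s actual implementation is considerably more elaborate.

```python
# Toy illustration of the regional household's Cobb-Douglas allocation:
# under Cobb-Douglas preferences each use of income receives a constant share.
# The share values are invented for illustration and are not GTAP parameters.

def allocate_regional_income(income, share_private=0.6, share_gov=0.2, share_savings=0.2):
    """Split regional income into private expenditure, government expenditure, and savings."""
    assert abs(share_private + share_gov + share_savings - 1.0) < 1e-9
    return {
        'private_expenditure': share_private * income,
        'government_expenditure': share_gov * income,
        'savings': share_savings * income,
    }

print(allocate_regional_income(1_000.0))
# {'private_expenditure': 600.0, 'government_expenditure': 200.0, 'savings': 200.0}
```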

On the producer side, GTAP includes detailed modeling for all 65 sectors, with each sector combining endowments and intermediate inputs according to its particular production technology. Inputs can be sourced domestically or imported, and the model tracks these flows across all countries in the system. This level of complexity allows for realistic modeling of global trade patterns and production networks, capturing important features such as global supply chains and the international division of labor.

Applied Earth Economy Modeling

Applied earth economy modeling represents an extension of standard CGE analysis that explicitly connects earth systems—including natural capital, ecosystems, climate, water, and soil—to the economy through the concept of ecosystem services. Some ecosystem services directly generate utility for consumers, such as recreational opportunities in natural areas. Other services affect production processes, such as pollination services that increase agricultural yields. Marketed ecosystem services, like timber harvested from forests or oil extracted from reserves, have observable market prices. Non-marketed services, like wild pollination or water filtration by wetlands, lack market prices but nonetheless significantly impact economic activity. The goal of earth economy modeling is to include these linkages explicitly in economic analysis, thereby avoiding the systematic undervaluation of non-marketed ecosystem services that occurs when they are omitted from models.

Clarifying Ecosystem Services Terminology

Natural resources within this framework are considered ecosystem services, specifically representing the flow of value from natural capital stocks to the economy. The terminology can be confusing because the word “services” is used in two distinct ways in economics: it refers both to economic goods like haircuts or financial advice, and to flows of value from natural capital. In the earth economy modeling framework, the value of natural resources is conceptualized as an ecosystem service, falling specifically into the category of provisioning services.

Historical Development: Shoven and Whalley (1984)

Moving to the historical foundations of CGE modeling, the seminal work of Shoven and Whalley from 1984 introduced computable general equilibrium models with strong links to empirical data and practical policy analysis. Their explicit goal was to transform the abstract Walrasian general equilibrium structure from a theoretical framework into realistic representations of actual economies that could be used for policy evaluation. The advent of modern computers made it possible to model economies with many sectors and commodities, freeing economists from the limitations of low-dimensional models that had to be solved by hand or with simple analytical techniques.

Earlier contributions had already begun expanding general equilibrium analysis beyond simple theoretical models. Leontief’s input-output analysis and Harberger’s work on tax incidence both expanded general equilibrium thinking to multiple sectors and policy-relevant questions. A crucial methodological breakthrough came from Scarf in 1967, who developed algorithms for numerically solving Walrasian systems of equations, making large-scale computation feasible.

The Shoven-Whalley Framework

The framework presented by Shoven and Whalley parallels the structure of standard economic models in many respects. Each consumer in the model has an endowment of resources and a set of preferences over goods, which together generate demand functions for each commodity. Demand in every market depends on the complete vector of prices in the economy, and demand functions must satisfy Walras’s Law, which states that for any set of prices, total consumer expenditure exactly equals total income. This budget constraint is what makes simultaneous market clearing possible.
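In generic notation (not tied to Shoven and Whalley’s exact symbols), Walras’s Law says that at any price vector the value of demand equals the value of endowments, so the value of excess demand is identically zero:

```latex
% Walras's Law: total expenditure equals total income at any prices p,
% equivalently the value of excess demand z(p) = x(p) - omega is zero.
\sum_{i} p_i \, x_i(p) = \sum_{i} p_i \, \omega_i
\quad\Longleftrightarrow\quad
p \cdot z(p) = 0, \qquad z(p) \equiv x(p) - \omega
```

One practical consequence is that if all but one market clears, the remaining market must clear as well, which is why CGE solvers typically drop one market-clearing equation and fix a numeraire price.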

Producers in the model maximize their surplus, choosing production levels to maximize profits given prices and technology. Equilibrium is formally defined as the particular set of prices and production levels where market demand equals market supply for all commodities simultaneously. Shoven and Whalley simplified their model to include two final goods—manufacturing and non-manufacturing—two input factors—capital and labor—and two types of consumers, with one owning all capital and the other owning all labor. This structure allows for analysis of distributional effects, showing how policies affect different groups differently, though most practical CGE applications use a single representative consumer for simplicity.

Production Technology: CES and CET Functions

Production in CGE models is typically specified with constant-returns-to-scale functional forms, most importantly the CES, or constant elasticity of substitution, production function, which describes substitution possibilities among inputs in both consumption and production. The key feature of CES functions is that the elasticity of substitution remains constant throughout the entire function, regardless of the input mix. The mathematical form of the CES function includes several parameters: a factor productivity term, often called total factor productivity or TFP, share parameters that determine the relative importance of different inputs, and a substitution parameter called rho, which relates directly to the elasticity of substitution. This functional form allows for smooth substitution between inputs like capital and labor while maintaining mathematical tractability.
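A standard two-input parameterization with exactly these ingredients (a common textbook form, not necessarily the notation used in the GTAP documentation) is:

```latex
% CES production function: A is total factor productivity, delta is a share
% parameter, and rho is the substitution parameter.
Y = A \left[ \delta K^{-\rho} + (1 - \delta) L^{-\rho} \right]^{-1/\rho},
\qquad
\sigma = \frac{1}{1 + \rho}
```

Here sigma is the constant elasticity of substitution between capital and labor: as rho approaches 0 the function approaches the Cobb-Douglas case (sigma = 1), and as rho grows large it approaches the fixed-proportions Leontief case (sigma = 0).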

The CET, or constant elasticity of transformation, function serves an analogous role on the output side. CET functions describe trade-offs in production, such as how a firm allocates productive resources between different possible outputs or how domestic production is divided between domestic sales and exports. Both CES and CET functions have mathematical properties that make them particularly convenient for solving large systems of simultaneous equations, which explains their widespread adoption in CGE modeling.
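The CET function has an analogous textbook form, again shown here as a common parameterization rather than GTAP’s exact notation, for a producer splitting output between domestic sales D and exports E:

```latex
% CET function: B is a scale parameter, gamma is a share parameter, and
% psi > 1 governs how easily output shifts between domestic sales and exports.
Q = B \left[ \gamma D^{\psi} + (1 - \gamma) E^{\psi} \right]^{1/\psi},
\qquad
\tau = \frac{1}{\psi - 1}
```

Here tau is the constant elasticity of transformation; larger values of tau mean output can be reallocated between the two destinations more easily.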

The Global Trade Analysis Project (GTAP)

GTAP, which stands for Global Trade Analysis Project, was founded in 1997 as a consortium of international organizations and national government agencies. The project began with the primary goal of developing a high-quality, harmonized database of economic indicators specifically designed for trade policy analysis. Balancing global trade data presents enormous challenges because real-world data reported by different countries are frequently inconsistent—one country’s reported exports to another often do not match that second country’s reported imports from the first. GTAP invests substantial effort in creating a consistent, balanced dataset that reconciles these discrepancies.

The term GTAP refers both to the database itself and to the CGE model built to work with that database. The model was originally developed by Tom Hertel and comprehensively documented in what is commonly called the “GTAP Book.” The most recent database documentation is found in Aguiar et al. from 2019, though the version that will be the primary focus for this course is documented in Narayanan et al. from 2017. This documentation is remarkably comprehensive and detailed, providing extensive information about both the database construction and the model structure.

Looking Ahead: Course Structure

Looking forward to the remainder of the course, students will encounter one problem set that includes a simple CGE question that does not require software, focusing instead on conceptual understanding and analytical reasoning. The remainder of the problem sets will involve hands-on software work, specifically using the RunGTAP application. RunGTAP provides a user-friendly graphical interface for running the GTAP model and working with the GTAP database, making it accessible even to students without extensive programming experience. Detailed installation instructions for RunGTAP will be distributed soon, allowing students to begin familiarizing themselves with the software before the assignments require its use.

Transcript

Welcome to Lecture 9, Introduction to Computable General Equilibrium Modeling. This will be our main topic for today, but first, I want to discuss the newest assignment I just posted. I’ll also leave time at the end of class to talk about Assignment 3 in more depth if anyone has questions. We’ll focus on Assignment 4, which centers on INVEST, but this time coding INVEST in Python—a whole new language. You might wonder, how do we learn languages so quickly? I want to take a moment to reflect on the opportunities and existential threats to our profession as economists that come from AI. We’ll get to that, then move into the standard information on introducing CGEs themselves. We’ll discuss our reading, Shoven and Whalley, 1984, a seminal work, as well as some earlier developments. Then, we’ll introduce some key equations, including CES and CET functions, and transition to discussing implementation in GTAP and related specifics.

To get started, please load up the Earth Economy DevStack code workspace. My animations didn’t work because I didn’t go full screen, but do not click the VS Code application icon directly—this can be confusing. Instead, launch it by opening the eartheconomydevstack.code-workspace file, which you should have cloned into your user’s file directory. This approach opens VS Code in a mode with multiple folders loaded. For example, I have many folders loaded, and Git and GitHub are integrated in VS Code. You’ll need at least the Earth Economy dev stack and our class repository loaded. If you don’t see the class repository in your VS Code Explorer tab, right-click in the area and add the folder to the workspace. You can save the workspace so you don’t have to do this each time, which will update the code workspace file. Does everyone have the class repository and the Earth Economy dev stack loaded into VS Code?

Excellent. In this context, you may have noticed that Assignment 4 requires knowing Python syntax, but we haven’t discussed Python yet. This reflects the new era we’re in with AI. I also previously taught the Undergrad Introduction to Microeconomics 1101, which fulfills a writing requirement. I had to rethink what that means because, as I’ve mentioned before, anyone using AI to write the exact prompt I gave previous students can produce something almost flawless. We’ve had to reconsider what it means to teach writing in an era where a single prompt can generate a complete essay in seconds. The university is still debating this, and there’s a lot of introspection.

For now, I’m focusing more on discussion and ensuring students can generate answers themselves, rather than just producing essays. In a similar way, software AI can generate very good code quickly. This is both powerful and dangerous—not in the sense of a singularity, but because if you use AI exclusively, you may find yourself in a situation where you don’t know why your code works, even though it does. It’s like investing in the short term versus the long term. In 8222, which many of you are taking, we’ll focus more on understanding why software works as it does, with a lot of hands-on, old-school coding.

For now, though, we’re going to embrace the “dark arts.” Has anyone heard of “vibe coding”? Vibe coding is using AI tools to generate code without fully understanding it. For example, I created an agent-based simulation in about three minutes using AI. It’s an agent-based replacement for a CGE, with good graphics and controls. I specified the frameworks and libraries, but the rest was coded by Claude. I understand the code, but it’s easy to see how you could create impressive things without understanding the underlying code. That’s vibe coding—very powerful, but the downside is you might not understand why it works, and if it stops working, you may not be able to troubleshoot it. That’s why I call it the dark arts.

Nonetheless, we’re embracing it here. In this class, you can use AI as much as you want—our syllabus allows it with or without attribution, but any mistakes are your responsibility. There was an article today about “comprehension debt,” the ticking time bomb of large language model-generated code. This is similar to “technical debt,” where shortcuts in code accumulate and need to be fixed later. Comprehension debt refers to the growing amount of LLM-generated code that we don’t fully understand, which can make maintenance difficult.

A recent preprint from Harvard, “Generative AI as Seniority-Biased Technical Change,” examines how generative AI might bias technological change, particularly in hiring patterns. They found that after the release of ChatGPT 3.5, junior employment in firms adopting AI declined sharply relative to non-adopters, while senior employment continued to rise. This suggests that the value of being an experienced programmer—someone who understands why things work—is increasing, while the value of low-level coding is decreasing, as AI can replace many low-level tasks but not high-level reasoning.

Despite these challenges, we’re going to use AI in this class. I want to show you how I would approach Assignment 4 using AI. Much of the assignment involves running the INVEST model, as we’ve done before. Instead of dedicating a lecture to using GDAL to open a raster or iterating over a for loop to call INVEST multiple times, I suggest working iteratively with AI.

When we ran INVEST previously, after solving the model, we could save the output as a Python file. This file is in our class repository, generated by INVEST 3.13—yours may look different. For the assignment, I would copy both the code and the question, and paste them into ChatGPT or another LLM, providing full context. It’s important to include both the question and the starter code, as the model may not know the specifics of INVEST otherwise.

For example, after pasting both, the LLM can generate code that modifies the arguments for different scenarios, such as increasing carbon in residential land use classes by 50%, 100%, 150%, or 200%. The LLM can even identify the relevant columns in the biophysical table and create a dictionary for the scenarios, modifying the args dictionary and rerunning the model for each scenario. This is good programming, but it also means you’re getting the answer directly from the AI. Hopefully, you’ll still develop intuition from the rest of the assignment, which asks you to write about your results.

Professional software programmers are embracing these tools, and it’s important to understand both how to use them and how to build intuition. Some universities initially banned LLMs, considering their use plagiarism, but enforcement is difficult, and the distinction between AI- and human-generated writing is becoming impossible to detect. Writing and coding are changing, and the best strategy is to understand both sides—use AI tools, but also work to build your own understanding.

Now, let’s return to computable general equilibrium models. I’ll stop sharing and go full screen. Let’s talk about CGEs.

First, a shout out to a great video showing a physical economic model—the MONIAC—which uses pumps and reservoirs to represent economic flows, such as consumption and savings. The height of the water in each reservoir indicates the accumulation of capital or other stocks. This is essentially what we’re building today: economic models. Fortunately, we can use mathematics instead of plumbing.

We’ve discussed earth economy modeling and GTAP INVEST, and we’re moving toward linking these models. We’ve spent time running INVEST, but not much on the CGE side, so we’ll focus on that now.

Let’s talk about the CGE component, starting from first principles. Everyone has learned about the circular flow diagram in Econ 101. It’s a foundational concept, though we often move on to equations and calculus. The basic idea is that agents (households and firms) are linked by markets, and prices result from interactions in these markets—supply and demand. Equilibrium is defined by the prices that clear all markets.

In addition to financial flows, there are physical flows of goods and services. For simplicity, we’ll focus on financial flows, as in the GTAP model. The circular flow diagram can be expanded to represent the GTAP CGE economy, which includes more boxes but retains the key elements: producers (firms), private households, and a regional household. The regional household is a modeling convenience, allocating income among private expenditure, savings, and government. It’s calibrated for each country, reflecting different savings rates and tax structures.

The regional household connects with systems of national accounts, aligning with macroeconomic identities like C + I + G + X – M. The main computational detail relevant to policy happens on the producer side. CGEs, and GTAP specifically, include many sectors—65 in the current version—each with its own production function. The production function combines endowments (capital, labor, natural resources) and intermediates (inputs from other sectors). Most real-world firms use both endowments and intermediates, and intermediates often dominate costs.

In CGEs, “value added” is synonymous with endowments. GTAP uses 65 sectors because that’s the level of detail supported by global data. Each sector’s production function combines value-added goods and intermediates, optimizing input use. Inputs can be domestic or imported, and the model accounts for differences in prices, tariffs, and product differentiation (e.g., champagne from France vs. sparkling wine elsewhere). GTAP models multiple countries (currently 160), each with its own circular flow, linked through trade.

The regional household collects all income generated in the economy and allocates it among private households, savings, and government, typically using a Cobb-Douglas function. This structure allows for policy analysis, such as examining the effects of changing savings rates.

On the producer side, GTAP includes detailed modeling of 65 sectors, each combining endowments and intermediates. Inputs can be domestic or imported, and the model tracks these flows across countries. This complexity allows for realistic modeling of global trade and production.

Applied earth economy modeling connects earth systems—natural capital, ecosystems, climate, water, soil—to the economy through ecosystem services. Some services directly generate utility (e.g., recreation), while others affect production (e.g., pollination). Marketed ecosystem services (like timber or oil) have prices, while non-marketed services (like wild pollination) do not, but both impact economic activity. Earth economy modeling aims to include these linkages to avoid undervaluing non-marketed services.

Any questions on the conceptual framework? This is the cutting edge of applied modeling. Natural resources are considered ecosystem services—the flow of value from natural capital to the economy. The terminology can be confusing, as “services” refers both to goods like haircuts and to flows from natural capital. In this framework, the value of natural resources is considered an ecosystem service, specifically a provisioning service.

Moving to Shoven and Whalley (1984), this seminal work introduced CGEs with strong links to data and policy analysis. Their goal was to convert the Walrasian general equilibrium structure from an abstract model into realistic representations of actual economies. With the advent of computers, it became possible to model economies with many sectors, freeing us from the limitations of low-dimensional, hand-solved models.

Earlier work, like that of Leontief and Harberger, expanded general equilibrium analysis to multiple sectors and policy questions. Scarf (1967) developed algorithms for numerically solving Walrasian systems. Shoven and Whalley’s framework is similar to standard economic models: each consumer has an endowment and preferences, leading to demand functions for each commodity. Markets depend on all prices, and demand functions must satisfy Walras’s Law: for any set of prices, total consumer expenditure equals income, ensuring market clearing.

Producers maximize surplus, and equilibrium is defined as the set of prices and production levels where market demand equals supply for all commodities. Shoven and Whalley simplified their model to two final goods (manufacturing and non-manufacturing), two input factors (capital and labor), and two types of consumers (one owning all capital, one all labor). This allows for analysis of distributional aspects, though most CGEs use a single representative consumer.

Production is defined using constant returns to scale or constant elasticity of substitution (CES) production functions. CES describes substitution among inputs in consumption or production, maintaining constant elasticity throughout the curve. The CES function includes a factor productivity term (often called total factor productivity, TFP), share parameters, and a substitution parameter (rho), which relates to elasticity. The function allows for smooth substitution between inputs like capital and labor.

CET (constant elasticity of transformation) functions describe trade-offs in production, such as allocating resources between different outputs. Both CES and CET have mathematical properties that make them convenient for solving large systems of equations.

GTAP stands for Global Trade Analysis Project, founded in 1997. It began as a consortium of international organizations and national partners, developing a high-quality, harmonized database of economic indicators for trade analysis. Balancing global trade data is challenging, as real-world data are often inconsistent. GTAP invests in creating a consistent, balanced dataset.

GTAP refers both to the database and the CGE model itself. The model was originally developed by Tom Hertel and documented in the “GTAP Book.” The most recent database documentation is Aguiar et al. (2019). We’ll focus on the version documented in Narayanan et al. (2017), which is comprehensive and detailed.

Looking ahead, we’ll have one problem set with a simple CGE question (non-software), and the rest will involve hands-on software, specifically the RunGTAP application, which provides a user-friendly interface for running the GTAP model and database. Instructions for installation will be sent out soon.

That’s a good place to stop for today. Any questions? Cool.