Price Elasticity of Demand

The price elasticity of demand measures the responsiveness of quantity demanded to a change in price, with all other factors held constant.


The price elasticity of demand, Ed, is defined as the magnitude of:

Ed = (proportionate change in quantity demanded) / (proportionate change in price)

Since the quantity demanded decreases when the price increases, this ratio is negative; however, the absolute value usually is taken and Ed is reported as a positive number.

Because the calculation uses proportionate changes, the result is a unitless number and does not depend on the units in which the price and quantity are expressed.

As an example calculation, take the case in which a product's Ed is reported to be 0.5. Then, if the price were to increase by 10%, one would observe a decrease of approximately 5% in quantity demanded.

In the above example, we used the word "approximately" because the exact result depends on whether the initial point or the final point is used in the calculation. This matters because for a linear demand curve the price elasticity varies as one moves along the curve. For small changes in price and quantity the difference between the two results often is negligible, but for large changes the difference may be more significant. To deal with this issue, one can define the arc price elasticity of demand. The arc elasticity uses the average of the initial and final quantities and the average of the initial and final prices when calculating the proportionate change in each. Mathematically, the arc price elasticity of demand is defined as:

Arc Ed = [ (Q2 - Q1) / ((Q1 + Q2) / 2) ] / [ (P2 - P1) / ((P1 + P2) / 2) ]


Q1 = Initial quantity
Q2 = Final quantity
P1 = Initial price
P2 = Final price
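The point and arc calculations can be compared with a short numeric sketch. The prices and quantities below are illustrative numbers, not taken from the text:

```python
# Hypothetical demand observations (illustrative values)
P1, P2 = 10.0, 11.0      # initial and final price
Q1, Q2 = 100.0, 95.0     # initial and final quantity demanded

# Point elasticity, using the initial point as the base
point_ed = abs((Q2 - Q1) / Q1) / abs((P2 - P1) / P1)

# Arc elasticity, using the averages of the endpoints as the base
arc_ed = abs((Q2 - Q1) / ((Q1 + Q2) / 2)) / abs((P2 - P1) / ((P1 + P2) / 2))

print(round(point_ed, 3))  # 0.5
print(round(arc_ed, 3))    # 0.538
```

The two results differ because the base of each proportionate change differs; for smaller price changes the gap shrinks.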

Elastic versus Inelastic

E > 1
In this case, the quantity demanded is relatively elastic, meaning that a price change will cause an even larger change in quantity demanded. The case of Ed = infinity is referred to as perfectly elastic. In this theoretical case, the demand curve would be horizontal. For products having a high price elasticity of demand, a price increase will result in a revenue decrease since the revenue lost from the resulting decrease in quantity sold is more than the revenue gained from the price increase.

E < 1
In this case, the quantity demanded is relatively inelastic, meaning that a price change will cause less of a change in quantity demanded. The case of Ed = 0 is referred to as perfectly inelastic. In this theoretical case, the demand curve would be vertical. For products whose quantity demanded is inelastic, a price increase will result in a revenue increase since the revenue lost by the relatively small decrease in quantity is less than the revenue gained from the higher price.

E = 1
In this case, the product is said to have unitary elasticity; small changes in price do not affect the total revenue.

Factors Affecting the Price Elasticity of Demand

  • Availability of substitutes: the more possible substitutes, the greater the elasticity. Note that the number of substitutes depends on how broadly one defines the product.
  • Degree of necessity or luxury: luxury products tend to have greater elasticity. Some products that initially have a low degree of necessity are habit forming and can become "necessities" to some consumers.
  • Proportion of the purchaser's budget consumed by the item: products that consume a large portion of the purchaser's budget tend to have greater elasticity.
  • Time period considered: elasticity tends to be greater over the long run because consumers have more time to adjust their behavior.
  • Permanent or temporary price change: a one-day sale will elicit a different response than a permanent price decrease.
  • Price points: decreasing the price from $2.00 to $1.99 may elicit a greater response than decreasing it from $1.99 to $1.98.

Industry Concentration

The concentration of firms in an industry is of interest to economists, business strategists, and government agencies. Here, we discuss two commonly-used methods of measuring industry concentration: the Concentration Ratio and the Herfindahl-Hirschman Index.

Concentration Ratio (CR)

The concentration ratio is the percentage of market share owned by the largest m firms in an industry, where m is a specified number of firms, often 4, but sometimes a larger or smaller number. The concentration ratio often is expressed as CRm, for example, CR4.

The concentration ratio can be expressed as:

CRm = s1 + s2 + s3 + ... + sm

where si = market share of the ith firm.

If the CR4 were close to zero, this value would indicate an extremely competitive industry since the four largest firms would not have any significant market share.

In general, if the CR4 measure is less than about 40 (indicating that the four largest firms own less than 40% of the market), then the industry is considered to be very competitive, with a number of other firms competing, but none owning a very large chunk of the market. On the other extreme, if the CR1 measure is more than about 90, that one firm that controls more than 90% of the market is effectively a monopoly.
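The calculation can be sketched as follows, using a hypothetical set of market shares (the shares are illustrative and sum to 100%):

```python
def concentration_ratio(shares, m=4):
    # CRm: the combined market share (in percent) of the m largest firms
    return sum(sorted(shares, reverse=True)[:m])

# Hypothetical industry: twelve firms, shares in percent
shares = [25, 18, 12, 8, 7, 6, 5, 5, 4, 4, 3, 3]

print(concentration_ratio(shares))      # CR4 = 63, above the ~40 "very competitive" range
print(concentration_ratio(shares, 1))   # CR1 = 25, far from a monopoly
```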

While useful, the concentration ratio presents an incomplete picture of the concentration of firms in an industry because by definition it does not use the market shares of all the firms in the industry. It also does not provide information about the distribution of firm size. For example, if market share shifted among the firms included in the ratio while their combined share stayed the same, the value of the concentration ratio would not change.

Herfindahl-Hirschman Index (HHI)

The Herfindahl-Hirschman Index provides a more complete picture of industry concentration than does the concentration ratio. The HHI uses the market shares of all the firms in the industry, and these market shares are squared in the calculation to place more weight on the larger firms. If there are n firms in the industry, the HHI can be expressed as:

HHI = s1^2 + s2^2 + s3^2 + ... + sn^2

where si is the market share of the ith firm.

Unlike the concentration ratio, the HHI will change if there is a shift in market share among the larger firms.

The Herfindahl-Hirschman Index is calculated by taking the sum of the squares of the market shares of every firm in the industry. For example, if there were only one firm in the industry, that firm would have 100% market share and the HHI would be equal to 10,000 -- the maximum possible value of the Herfindahl-Hirschman Index. On the other extreme, if there were a very large number of firms competing, each with nearly zero market share, then the HHI would be close to zero, indicating nearly perfect competition.

The U.S. Department of Justice uses the HHI in guidelines for evaluating mergers. An HHI of less than 1000 represents a relatively unconcentrated market, and the DOJ likely would not challenge a merger that would leave the industry with an HHI in that range. An HHI between 1000 and 1800 represents a moderately concentrated market, and the DOJ likely would closely evaluate the competitive impact of a merger that would result in an HHI in that range. Markets having an HHI greater than 1800 are considered to be highly concentrated; there would be serious anti-trust concerns over a proposed transaction that would increase the HHI by more than 100 or 200 points in a highly concentrated market.
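The HHI calculation and the DOJ concentration ranges described above can be sketched together. The market shares below are hypothetical:

```python
def hhi(shares):
    # Herfindahl-Hirschman Index: sum of squared market shares (in percent)
    return sum(s ** 2 for s in shares)

def doj_category(index):
    # Concentration ranges from the DOJ guidelines described in the text
    if index < 1000:
        return "unconcentrated"
    if index <= 1800:
        return "moderately concentrated"
    return "highly concentrated"

print(hhi([100]))  # 10000 -- the single-firm maximum

# Same hypothetical twelve-firm industry used for the concentration ratio
shares = [25, 18, 12, 8, 7, 6, 5, 5, 4, 4, 3, 3]
print(hhi(shares), doj_category(hhi(shares)))  # 1342 moderately concentrated
```

Note that squaring gives the largest firms disproportionate weight: the 25% firm alone contributes 625 of the 1342 points.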

Other Considerations in Using Industry Concentration Measures

One should be aware that these measures are influenced by the definition of the relevant market. For example, the automotive industry is not the same as the market for sport utility vehicles. One also must consider the geographic scope of the market, for example, national markets versus local markets.

Game Theory

Game theory analyzes strategic interactions in which the outcome of one's choices depends upon the choices of others. For a situation to be considered a game, there must be at least two rational players who take into account one another's actions when formulating their own strategies.

If one does not consider the actions of other players, then the problem becomes one of standard decision analysis, and one is likely to arrive at a strategy that is not optimal. For example, a company that reduces prices to increase sales and therefore increase profit may lose money if other players respond with price cuts. As another example, consider a risk averse company that makes its decisions by maximizing its minimum payoff (maxmin strategy) without considering the reactions of its opponents. In such a case, the minimum payoff might be one that would not have occurred anyway because the opponent might never find it optimal to implement a strategy that would make it come about. In many situations, it is crucial to consider the moves of one's opponent(s).

Game theory assumes that one has opponents who are adjusting their strategies according to what they believe everybody else is doing. The exact level of sophistication of the opponents should be part of one's strategy. If the opponent makes his/her decisions randomly, then one's strategy might be very different than it would be if the opponent is considering others' moves. To analyze such a game, one puts oneself in the other player's shoes, recognizing that the opponent, being clever, is doing the same. When this consideration of the other player's moves continues indefinitely, the result is an infinite regress. Game theory provides the tools to analyze such problems.

Game theory can be used to analyze a wide range of strategic interaction environments including oligopolies, sports, and politics. Many product failures can be attributed to the failure to consider adequately the responses of competitors. Game theory forces one to consider the range of a rival's responses.

Elements of a Game

  • Players: The decision makers in the game.
  • Actions: Choices available to a player.
  • Information: Knowledge that a player has when making a decision.
  • Strategies: Rules that tell a player which action to take at each point of the game.
  • Outcomes: The results that unfold, such as a price war, world peace, etc.
  • Payoffs: The utilities (or happiness) that each player realizes for a particular outcome.
  • Equilibria: An equilibrium is a stable result. Equilibria are not necessarily good outcomes, a fact that is illustrated by the prisoner's dilemma.

Game Theory Framework

When evaluating a situation in which game theory is applicable, the following framework is useful.

  1. Define the problem
  2. Identify the critical factors. Examples of critical factors include differentiated products, first-mover advantage, entry and exit costs, variable costs, etc.
  3. Build a model, such as a bimatrix game or an extensive form game.
  4. Develop intuition by using the model
  5. Formulate a strategy - cover all possible scenarios.

A good strategy could be used as a set of instructions for someone who knows nothing about the problem. It specifies the best action for each possible observation. The best strategy may be formulated by first evaluating the complete set of strategies. The complete set of strategies is a list of all possible actions for each possible observation.

Bimatrix Games

In a bimatrix game, there are two players who effectively make their moves simultaneously without knowing the other player's action. A bimatrix game can be represented by a matrix of rows and columns. Each cell in the matrix has a pair of numbers representing the payoff for the row player (the first number) and the column player (the second number). The game has the following form:

                         Column Player (CP)

                         CP Option 1                     CP Option 2

Row Player (RP)
   RP Option 1           (Payoff to RP, Payoff to CP)    (Payoff to RP, Payoff to CP)
   RP Option 2           (Payoff to RP, Payoff to CP)    (Payoff to RP, Payoff to CP)

The general form of equilibrium in a bimatrix game is called a Nash Equilibrium. If both rivals have dominant strategies that coincide, then the equilibrium is called a dominant strategy equilibrium, a special case of a Nash equilibrium.

A dominant strategy, if one exists, is a strategy that is always a player's best choice regardless of what the rival plays. A dominated strategy is one for which some other strategy is always better regardless of what the rival plays. In games having more than two rows or columns, one may find it useful to identify an option that is always better or worse than another option, in other words, one that dominates or is dominated by another option. In this case, the inferior strategy can be eliminated and the game simplified, after which more options may be eliminated based on the smaller matrix.

If no options dominate any others, a Nash equilibrium might still be found by evaluating each player's best option for each option of the opponent. If a cell coincides for both players, then that cell is a Nash equilibrium. A game can have more than one Nash equilibrium, but one of them may be the more likely outcome if it is better for both players.
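The best-response search described above can be sketched directly. The payoffs below are the standard prisoner's dilemma numbers, used here only as an illustration:

```python
# payoffs[row][col] = (row player's payoff, column player's payoff)
# Prisoner's dilemma: action 0 = cooperate, 1 = defect (illustrative values)
payoffs = [
    [(3, 3), (0, 5)],
    [(5, 0), (1, 1)],
]

def pure_nash_equilibria(payoffs):
    rows, cols = len(payoffs), len(payoffs[0])
    equilibria = []
    for r in range(rows):
        for c in range(cols):
            # A cell is a Nash equilibrium if neither player gains by
            # unilaterally switching to a different row or column.
            row_best = all(payoffs[r][c][0] >= payoffs[r2][c][0] for r2 in range(rows))
            col_best = all(payoffs[r][c][1] >= payoffs[r][c2][1] for c2 in range(cols))
            if row_best and col_best:
                equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(payoffs))  # [(1, 1)] -- both defect
```

The single equilibrium (both defect) pays each player 1, even though mutual cooperation would pay each 3, illustrating that an equilibrium need not be a good outcome.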

Extensive Form Games

Extensive form games are modeled as a tree of nodes (dots) connected by arrows. A node is a decision point. The beginning point is depicted by an open dot, which usually represents a state from which a situation will arise by chance. Decision points are labeled with the name of the player making the decision. The following diagram shows the structure of an extensive-form game representation.

[Diagram: Structure of an Extensive Form Game]

Chance nodes can appear anywhere in the extensive form tree.

An information set is a collection of nodes that are controlled by the same player but which are indistinguishable to that player. In other words, for nodes in the same information set, the player knows only that he/she is at one of those nodes, not which one. In the preceding diagram, the drawn information sets might arise if Decision A and Decision C were indistinguishable to Player 2, as were Decision B and Decision D. If a single dotted line encompassed all of the Player 2 decision nodes (or four connected dotted circles), then Player 2 would not be able to distinguish among any of the four decisions.

An extensive form game without information sets designated is one in which the players know exactly where they are in the tree. This situation is equivalent to one of dotted circles drawn around each decision point in the tree but not connected to one another. If neither player can observe anything about the other player's action, the sequential extensive form game can be reduced to the simultaneous-action bimatrix game.

Normal-Form (Strategic Form) Game Representation

The extensive form of representing a game can become difficult to manage as the game grows larger, and the Nash equilibria may become difficult to find. The extensive form representation can be collapsed into the normal form, which encodes each player's plan as a strategy describing the action to take in each conceivable situation (for example, at each information set). The normal form is a complete listing of all the possible strategies along with their payoffs. For a generic case in which there are three situations (information sets) based on Player 1's move, and two possible responses by Player 2, the normal form takes the following structure:

                  Player 1

                  If Player 1       If Player 1       If Player 1
                  does A, then      does B, then      does C, then

Player 2          [each row lists one of Player 2's complete strategies,
                  specifying the response chosen for each of Player 1's
                  three possible moves, along with the resulting payoffs]
Signaling and Threats

A signal is a commitment that changes the strategic situation. For example, if there exists danger of a price war, a smaller firm may sell a plant in order to reduce capacity and limit the amount of a larger firm's demand that it can steal, signaling that it does not intend to build substantial market share. As another example, a sports player who for some reason refuses to undergo a medical exam before negotiating his salary will raise doubts as to whether he has a medical condition that might affect his performance. This player might wish to signal that he does not have such a condition by proposing that his pay be tied to his performance. For a signal to be effective, its cost to a bluffer must exceed the benefits.

A threat is credible if it is believable. A threat is believable if it is in the best interest of the one making the threat to carry it out.



Legal Issues

One must consider that just because a possible action of one's opponent is illegal, this technicality might not prevent the action from being taken. For example, if a small but growing company's opponent is contemplating an illegal action, one should not rule out the possibility of that action without considering the cost of filing an anti-trust suit. The time and money required might not be the best thing for the development of the business. Even if the opponent is found guilty, the small company may be out of business by the time the suit is resolved. On the other hand, the publicity from such a lawsuit might be exactly what the smaller player needs to build brand awareness and win over new customers. (This is an example of exploitation of a big versus small asymmetry.)


Game theory helps one to develop optimal strategies. In an environment in which many outcomes are pre-determined when sophisticated players follow their best strategies, the way to improve one's payoff is to change the actual structure of the strategic interactions before the game is even played.


Auctions

Auctions are mechanisms for determining prices. Auctions often are classified as one of the following auction types:

· First-price sealed-bid auction - winner pays his bid. In this case, one should bid below one's value an amount that depends on how many other bidders there are. The more bidders, the closer to one's value that one should bid. There is a tradeoff between profit and the frequency of winning.

· Second-price sealed-bid auction - winner pays highest losing bid. In this type of auction, the optimal strategy is to bid one's value.

· English auction - auctioneer begins with a low price. Bidders raise their bids until nobody is willing to bid higher. The optimal strategy in an English auction is to bid up to one's value, staying in the auction until the bids exceed one's value.

· Dutch auction - auctioneer calls out prices beginning with a very high value and gradually reduces it. The first bidder to accept an offered price wins. The Dutch auction gets its name because of its use in the flower markets in Holland. Note that eBay defines Dutch auctions differently. On eBay, a Dutch auction is one in which there are multiple units of the same item and all successful bidders pay a price equal to the lowest successful bid.

Auctions also may be characterized by whether the value is common (the same) to the bidders, for example, an oil lease, or by whether the value is private (different) to each bidder.

Single-round sealed-bid auctions can be analyzed as multiple-player, simultaneous choice games. In a common-value sealed-bid auction, the winning bidder tends to be the one who most overestimates the item's value, so winning suggests that one has overpaid. This phenomenon is known as the winner's curse. In an open auction, the winner's curse is less pronounced because information revealed by other bidders helps one to value the item.

One's strategy depends on whether other bidders are simply bidding their value or are "shading" their bids. One can evaluate potential strategies using deviation logic. This involves asking oneself if one would utilize the strategy if everybody else were utilizing it.
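The claim that bidding one's value is optimal in a second-price auction can be checked with a simulation. Everything here is an illustrative sketch: values are drawn uniformly, rivals are assumed to bid their values, and the "shaded" bidder bids 10% below value:

```python
import random

def second_price_payoff(my_bid, my_value, other_bids):
    # Win if my bid is the highest; the winner pays the highest losing bid
    top_other = max(other_bids)
    if my_bid > top_other:
        return my_value - top_other
    return 0.0

random.seed(0)
truthful, shaded = 0.0, 0.0
for _ in range(10000):
    value = random.random()
    others = [random.random() for _ in range(3)]  # rivals bid their values
    truthful += second_price_payoff(value, value, others)
    shaded += second_price_payoff(0.9 * value, value, others)  # 10% shading

print(truthful > shaded)  # True
```

Shading never changes the price paid when one wins (the price is set by the highest losing bid); it only forfeits auctions one would have won profitably, so truthful bidding accumulates at least as much payoff on every draw.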


Gross Domestic Product

Economic growth is measured in terms of an increase in the size of a nation's economy. A broad measure of an economy's size is its output. The most widely-used measure of economic output is the Gross Domestic Product (abbreviated GDP).

GDP generally is defined as the market value of the goods and services produced by a country. One way to calculate a nation's GDP is to sum all expenditures in the country. This method is known as the expenditure approach and is described below.

Expenditure Approach to Calculating GDP

The expenditure approach calculates GDP by summing the four possible types of expenditures as follows:

GDP  =  Consumption  +  Investment  +  Government Purchases  +  Net Exports

Consumption is the largest component of the GDP. In the U.S., the largest and most stable component of consumption is services. Consumption is calculated by adding durable and non-durable goods and services expenditures. It is unaffected by the estimated value of imported goods.

Investment includes investment in fixed assets and increases in inventory.

Government purchases are equal to the government expenditures less government transfer payments (welfare, unemployment payouts, etc.)

Net exports are exports minus imports. Imports are subtracted since GDP is defined as the output of the domestic economy.
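The expenditure approach can be sketched with hypothetical national-accounts figures (the numbers below are illustrative, in billions):

```python
# Hypothetical components of GDP, in billions (illustrative values)
consumption = 6800.0           # durable goods + non-durable goods + services
investment = 1600.0            # fixed assets + increases in inventory
government_purchases = 1700.0  # expenditures less transfer payments
exports, imports = 1100.0, 1500.0

net_exports = exports - imports               # negative here: a trade deficit
gdp = consumption + investment + government_purchases + net_exports
print(gdp)  # 9700.0
```

Note that imports enter only through net exports, where they are subtracted so that GDP reflects domestic output.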

Alternative Approaches to Calculating GDP

There are three approaches to calculating GDP:

· expenditure approach - described above; calculates the final spending on goods and services.

· product approach - calculates the market value of goods and services produced.

· income approach - sums the income received by all producers in the country.

These three approaches are equivalent, with each rendering the same result.

Final Sales as a GDP Predictor

Note that an increase in inventory will increase the GDP but possibly result in a lower future GDP as the excess inventory is depleted. To eliminate this effect, the final sales can be calculated by subtracting the increase in inventory from GDP. The final sales can be either larger or smaller than GDP. The change in inventory is an important signal of the next period's GDP.
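The adjustment is a simple subtraction, sketched here with hypothetical figures continuing the illustrative billions used above:

```python
gdp = 9700.0                # hypothetical GDP, in billions
inventory_increase = 120.0  # the portion of investment that went into inventories

final_sales = gdp - inventory_increase
print(final_sales)  # 9580.0
```

Had inventories fallen during the period, the inventory change would be negative and final sales would exceed GDP.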

Nominal GDP and Real GDP

Without any adjustment, the GDP calculation is distorted by inflation. This unadjusted GDP is known as the nominal GDP. In practice, GDP is adjusted by dividing the nominal GDP by a price deflator to arrive at the real GDP.

In an inflationary environment, the nominal GDP is greater than the real GDP. If the price deflator is not known, an implicit price deflator can be calculated by dividing the nominal GDP by the real GDP:

Implicit Price Deflator = Nominal GDP / Real GDP

The composition of this deflator is different from that of the consumer price index in that the GDP deflator includes government goods, investment goods, and exports rather than the traditional consumer-oriented basket of goods.
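The implicit price deflator calculation can be sketched with hypothetical figures:

```python
nominal_gdp = 9700.0  # hypothetical, in current dollars
real_gdp = 9200.0     # hypothetical, in constant (base-year) dollars

implicit_price_deflator = nominal_gdp / real_gdp
print(round(implicit_price_deflator, 4))  # 1.0543
```

A deflator greater than 1 reflects the inflationary case described above, in which nominal GDP exceeds real GDP.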

GDP usually is reported each quarter on a seasonally adjusted annualized basis.

GDP Growth

Countries seek to increase their GDP in order to increase their standard of living. Note that growth in GDP does not result in increased purchasing power if the growth is due to inflation or population increase. For purchasing power, it is the real, per capita GDP that is important.

While investment is an important factor in a nation's GDP growth, even more important is greater respect for laws and contracts.

GDP versus GNP

GDP measures the output of goods and services within the borders of the country. Gross National Product (GNP) measures the output of a nation's factors of production, regardless of whether the factors are located within the country's borders. For example, the output of citizens working in another country would be included in their home country's GNP but not its GDP. The Gross National Product can be either larger or smaller than the country's GDP depending on the number of its citizens working outside its borders and the number of other countries' citizens working within its borders.

In the United States, the Gross National Product (GNP) was the featured measure until the early 1990s, when official reporting switched to GDP in order to be consistent with other nations.

Consumer Price Index

The most commonly reported measure of consumer price levels in the United States is the Consumer Price Index (CPI). Published by the U.S. Department of Labor's Bureau of Labor Statistics, the CPI is a fixed-weight price index using a fixed basket of goods that is representative of what a typical consumer purchases each month.

There are many different CPIs calculated by region, type of product, type of consumer, etc. The most commonly reported CPI is the CPI-U, which is the CPI for all urban consumers.

Increases in the CPI level serve as a measure of the consumer inflation rate. The rate of inflation over a period of time is simply the percentage increase in the CPI over the period, often reported on an annualized basis.
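The inflation-rate calculation can be sketched with hypothetical CPI levels one year apart:

```python
# Hypothetical CPI levels, one year apart (illustrative values)
cpi_start, cpi_end = 180.0, 184.5

inflation_rate = (cpi_end - cpi_start) / cpi_start * 100
print(round(inflation_rate, 2))  # 2.5 (percent per year)
```

For periods other than a year, the same percentage change would typically be annualized before being reported.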

Uses of the CPI

The CPI has many important uses, including the following:

· Economic indicator - the CPI is the most commonly reported measure of consumer prices.

· Reference for escalation agreements - labor contracts and other payment agreements that are indexed to inflation rely on the CPI.

· Deflator for economic series - when a series of data is to be adjusted so that it is reported in constant dollars, the CPI often is used as the deflator.

Because of the widespread use of the CPI, especially for adjusting payments to inflation, its accuracy can have a significant impact on the economy. In recent years, the accuracy of the CPI has been questioned due to a number of biases that cause it to overstate the effective rate of inflation.

CPI Biases

The CPI tends to overstate inflation because of the following biases:

· Substitution bias - when the price of a product in the consumer basket increases substantially, consumers tend to substitute lower-priced alternatives. For example, if a freeze in Florida causes the price of oranges to skyrocket, consumers may substitute Texas grapefruits for Florida oranges. Since the CPI is a fixed-weight price index, it would not accurately predict the impact of the price increase on the consumer's budget.

· Quality bias - over time, technological advances increase the life and usefulness of products. For example, the useful life of automobile tires increased substantially over the past few decades, decreasing the tire cost on a per mile basis, but the CPI does not reflect such improvements.

· New product bias - new products are not introduced into the index until they become commonplace, so the dramatic price decreases often associated with new technology products are not reflected in the index.

· Outlet bias - the consumer shift to new outlets such as wholesale clubs and online retailers is not well-represented by the CPI.

Some economists estimate that such biases overstate the CPI by about 1% per year.

The U.S. Department of Labor has responded to these biases by more frequently changing the base period when the items in the index and their weights are adjusted. Also, the government now is quicker to add new products to the CPI basket and has made quality adjustments to the index.

The Business Cycle

Economic growth is not a steady phenomenon; rather, it tends to exhibit a pattern as follows:

  1. an expansion of above-average growth
  2. a peak
  3. a contraction of below-average growth
  4. a trough or low-point

The troughs then are followed by periods of expansion and the cycle generally repeats, though not in a regular manner. These fluctuations in economic growth are known as the business cycle and are depicted conceptually in the following diagram:

[Diagram: The Business Cycle]

Indicators of the Business Cycle

Because the business cycle is related to aggregate economic activity, a popular indicator of the business cycle in the U.S. is the Gross Domestic Product (GDP). The financial media generally considers two consecutive quarters of negative GDP growth to indicate a recession. Used as such, the GDP is a quick and simple indicator of economic contractions.

However, the National Bureau of Economic Research (NBER) places relatively little weight on GDP as a primary business cycle indicator because GDP is subject to frequent revision and is reported only on a quarterly basis (the business cycle is tracked on a monthly basis). The NBER relies primarily on indicators such as the following:

  • employment
  • personal income
  • industrial production

Additionally, indicators such as manufacturing and trade sales are used as measures of economic activity.

Notable Business Cycle Expansions and Contractions

According to the National Bureau of Economic Research, the longest U.S. economic expansion on record began in March 1991 and lasted until March 2001, a duration of 10 years.

The longest economic contraction in the NBER database was the 65-month contraction from October 1873 until March 1879. By comparison, the contraction that began in 1929 and that initiated the Great Depression lasted 43 months, from August 1929 until March 1933.

Business Cycle Intensity Over Time

Many economists believe that the business cycle has become less pronounced, exhibiting briefer and shallower economic contractions. While there is economic data to support a diminished business cycle, other economists argue that the data prior to 1929 was not very accurate and tended to overstate the magnitude of the economic swings.

Whether the business cycle has become less intense has practical importance because after World War II the U.S. government initiated policies with the intent to minimize the severity of economic contractions, so a decrease in the intensity of the contractions would support the arguments of those who advocate such policies. Whether the business cycle really has declined in severity is a question that remains open to debate.


Unemployment

The percentage of the labor force that is seeking a job but does not have one is known as the unemployment rate. The unemployment rate is defined as follows:

Unemployment Rate  =  Unemployed Workers / (Employed + Unemployed Workers)  x  100%

Unemployed workers are those who are jobless, seeking a job, and ready to work if they find a job.

The sum of the employed and unemployed workers represents the total labor force. Note that the labor force does not include the jobless who are not seeking work, such as full-time students, homemakers, and retirees. They are considered to be outside the labor force.

The labor force participation rate is the percentage of the adult population that is part of the total labor force. All of these measures consider only persons 16 years of age or older.
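Both rates can be sketched with hypothetical survey counts (illustrative values, in thousands of persons 16 and older):

```python
# Hypothetical labor survey counts, in thousands (illustrative values)
employed = 140000.0
unemployed = 8000.0          # jobless, seeking a job, and ready to work
adult_population = 220000.0  # persons 16 years of age or older

labor_force = employed + unemployed
unemployment_rate = unemployed / labor_force * 100
participation_rate = labor_force / adult_population * 100

print(round(unemployment_rate, 2))   # 5.41 (percent)
print(round(participation_rate, 2))  # 67.27 (percent)
```

Note that the jobless who are not seeking work appear in the adult population but in neither the numerator nor the denominator of the unemployment rate.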

The movement among the three groups can be illustrated as shown in the following diagram.

[Diagram: Model of Labor Force Movement among the Employed, Unemployed, and Not in the Labor Force groups]

The diagram shows seven possible movements:

1. Employed to Employed - an employed person moves directly from one job to another job.

2. Employed to Unemployed - an employed person moves to unemployed status either as a job loser leaving against one's will or as a job quitter who leaves voluntarily with the intention to search for another job.

3. Employed to Not in the Labor Force - an employed person quits a job with no intention of immediately finding another job, for example, to return to school, to raise a family, or to retire.

4. Unemployed to Employed - an unemployed person finds and accepts a job.

5. Unemployed to Not in the Labor Force - an unemployed person ends the job search and leaves the labor force, often because of lack of success in finding a job after an extended period of time (discouraged workers).

6. Not in the Labor Force to Unemployed - a person who is not in the labor force begins a job search, for example, a student who seeks a job after graduation.

7. Not in the Labor Force to Employed - a person not in the labor force moves directly into a job, for example, a student with a job waiting upon graduation.

From this model, we see that a worker may end up in the grouping of "unemployed" from one of two possible paths: 1) by job separation, either as a job loser or a job quitter, and 2) by moving into the labor force, either as a new entrant or as a re-entrant.

Full Employment and the Natural Rate of Unemployment

Policy makers commonly have sought to use monetary policy to achieve full employment in the economy. Over the years, several different definitions of full employment have been proposed, but any such definition is complicated by the fact that the economy always has some unemployment, even during economic expansions. This non-zero rate of unemployment is due to:

· Frictional unemployment - caused by the fact that it takes time for employers and workers to find an appropriate match. For example, job seekers tend to spend time searching for the best possible job rather than take the first one available, and employers take the time to interview several candidates to find the best fit. Unemployment insurance increases frictional unemployment by decreasing the opportunity cost of unemployment, thereby increasing the lowest wage that the job seeker would be willing to accept and lengthening the job search.

· Structural unemployment - refers to unemployment caused by a mismatch between workers and jobs. This mismatch may be in geographical location or in skills. For example, technological change may have caused a worker's skills to become obsolete, and he or she may experience a period of unemployment before finding the opportunity to develop new skills and to adapt. The resulting surplus of labor (quantity supplied is greater than quantity demanded) is influenced by minimum wage laws, collective bargaining, and efficiency wages, all of which create higher wages that attract more people into the labor force while decreasing the demand for labor.

Since zero unemployment is unachievable in a free labor market, Milton Friedman used the term natural rate of unemployment to describe the baseline rate of unemployment, considering that some unemployment cannot be avoided. The natural rate of unemployment is the sum of the frictional and structural unemployment rates. It does not include cyclical unemployment that results from a downturn in the business cycle.
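The relationship among these rates is simple arithmetic; the component rates below are hypothetical, chosen only to illustrate the decomposition:

```python
# Illustrative (hypothetical) component rates, in percent.
frictional_rate = 2.5
structural_rate = 3.0
actual_unemployment_rate = 7.0

# The natural rate is the sum of the frictional and structural rates.
natural_rate = frictional_rate + structural_rate

# Cyclical unemployment is the portion of the actual rate above
# (or below) the natural rate; it is not part of the natural rate.
cyclical_rate = actual_unemployment_rate - natural_rate

print(f"Natural rate:  {natural_rate:.1f}%")   # 5.5%
print(f"Cyclical rate: {cyclical_rate:.1f}%")  # 1.5%
```

In an expansion the cyclical component may be negative, meaning the actual rate has fallen below the natural rate, which, as discussed below, creates upward pressure on wages.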

When the unemployment rate falls below its natural rate, there is upward pressure on wages, and the economy runs the risk of inflation. Under the natural rate hypothesis, there is no simple stable trade-off between the rate of inflation and the rate of unemployment; rather, once the unemployment rate falls below the natural rate, inflation accelerates. The natural rate of unemployment therefore became known as the non-accelerating inflation rate of unemployment (NAIRU).

The natural rate of unemployment changes over time. In the U.S., some mainstream economists have placed the natural rate of unemployment in the 5% to 6% range, though other economists have placed it as low as 4% and as high as 7% over the past several decades. This variability and lack of precision in the natural rate of unemployment represent a source of uncertainty with which policy makers must deal.

Public policy itself has an impact on the natural rate of unemployment. With regard to frictional unemployment and labor surplus we see at least two levers controlled by public policy: 1) unemployment insurance, and 2) minimum wage laws. As discussed above, both of these tend to increase the natural rate of unemployment, and there is a trade-off between the benefits of such labor policies and an increased natural rate of unemployment.

Seasonal Variations

The number of job seekers changes over the course of the year due to seasonal effects. For example, weather patterns, harvests, tourist seasons, school and university calendars, and holidays all influence unemployment numbers. If left unadjusted, such changes make it difficult to compare unemployment figures from one month to the next.

To address seasonal variations, the U.S. Bureau of Labor Statistics adjusts the monthly unemployment rate numbers based on a statistical analysis of previous years. The result is that the reported unemployment rate more accurately reflects the underlying state of the economy.
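The idea behind seasonal adjustment can be illustrated with a toy calculation: estimate how far a given calendar month typically deviates from the yearly average, then remove that deviation from the raw figure. All numbers here are hypothetical, and the BLS uses far more sophisticated statistical models than this averaging sketch:

```python
# Hypothetical June unemployment rates (%) from prior years, inflated by
# students entering the labor force at the end of the school year.
past_june_rates = [6.4, 6.6, 6.5]

# Hypothetical average rate over all months of those years (%).
overall_average = 6.0

# The seasonal factor for June is its typical deviation from the mean.
june_seasonal_factor = sum(past_june_rates) / len(past_june_rates) - overall_average

# Adjust this June's raw rate by removing the seasonal component.
raw_june_rate = 6.7
adjusted_june_rate = raw_june_rate - june_seasonal_factor
print(f"Seasonally adjusted rate: {adjusted_june_rate:.1f}%")  # lower than the raw 6.7%
```

After adjustment, month-to-month changes in the reported rate reflect the underlying state of the economy rather than the calendar.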

Duration of Unemployment

In addition to the unemployment rate itself, the average length of time that a person remains unemployed also is of interest. The severity of the impact of unemployment depends in part on how quickly and easily an unemployed person can find work. For example, teenagers have an unemployment rate that is much higher than average, but they also find jobs more quickly and therefore have a shorter average duration of unemployment. Statistics reported by the U.S. Department of Labor indicate that since 1948, the average duration of unemployment in the U.S. has ranged from a low of approximately 7 weeks to a high of approximately 20 weeks.

Statistics such as the number of workers unemployed for more than half a year provide additional information about the unemployment situation.

Limitations of the Unemployment Rate Measurement

The unemployment rate is not a perfect indicator of employment in the economy. The following are some reasons:

· Discouraged workers - those who want a job but have given up looking and therefore do not fall within the definition of the labor force. These persons tend to make the reported unemployment rate lower than it otherwise would be.

· Collecting benefits but not job seeking - while a state unemployment office may require a person to actively seek a job in order to collect unemployment insurance benefits, some benefit recipients do not really want a job and do not put much effort into the job search. Due to this effect, the reported unemployment rate is higher than it otherwise might be.

· Underemployed - a person is counted as employed if he or she is working part-time; however, that person nonetheless may be seeking full-time work.

Discouraged workers and those collecting unemployment benefits without seeking a job make it difficult to distinguish between those who are unemployed and those who are not in the labor force. These effects work in opposite directions, so the reported rate may either overstate or understate true unemployment. As long as any bias in the unemployment rate is relatively constant over time, the rate is still useful for measuring changes in the economy from one period to the next.

Other indicators, such as the number of discouraged workers and part-time labor statistics, can supplement the unemployment rate to provide additional insight.

Impact of Unemployment

Unemployment presents problems for both the individual and for the economy as a whole.

· Individual hardship (financial and psychological) can arise when a person needs a job and cannot find one. The individual's economic hardship is mitigated somewhat by unemployment insurance benefits.

· Aggregate economic output is less than the potential GDP level due to loss of production from those who are unemployed.


