Accuracy targets

An Approach to Setting Forecast Accuracy Targets

One question that I often get is about setting a proper target for forecast accuracy for a company. Practitioners often frame it this way: ‘My management team is challenging me to get the forecast accuracy higher. I just do not know how to set their expectations as to what the maximum achievable accuracy is given our business conditions.’ Let us attempt to address this question here.

In this blog, we will avoid a discussion on what is a good forecast error metric and simply assume the one that is very commonly used: Mean Absolute Percent Error (MAPE).
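As a quick reference, MAPE averages the absolute percentage errors across periods, and accuracy is then often reported as 100% minus MAPE (one common convention among several). A minimal sketch in Python, with illustrative numbers:

```python
def mape(actuals, forecasts):
    """Mean Absolute Percent Error, in percent.

    Periods with zero actuals are skipped, since the percentage
    error is undefined there (a known weakness of MAPE).
    """
    pairs = [(a, f) for a, f in zip(actuals, forecasts) if a != 0]
    return 100.0 * sum(abs(a - f) / abs(a) for a, f in pairs) / len(pairs)

def accuracy(actuals, forecasts):
    """Forecast accuracy as 100% minus MAPE, floored at zero."""
    return max(0.0, 100.0 - mape(actuals, forecasts))

print(accuracy([100, 120, 90], [110, 115, 100]))  # about 91.6
```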

Comparing to Other Companies

One obvious approach is to use a benchmarking survey or other published data. On any given day, it is possible to Google forecast accuracy data for many companies as reported in surveys, articles, and the like. IBF publishes an annual survey of forecast accuracy data across industries. The problem with this approach is that companies measure forecast accuracy differently, so the published numbers are generally comparable only at too high a level of aggregation to be useful. This point was made very well in this article a few years ago.

Comparing to the Past

The other obvious approach is to compare to the past forecast accuracy data for the company in question. However, it is probably safe to say that some amount of dissatisfaction with the historical forecast accuracy led to the question in the first place. Hence, this approach is also not promising.

Our Approach

Our approach to target forecast accuracy is based on the idea that for a given data set, only so much accuracy is possible given a particular formula of measuring forecast accuracy. An analogy I like to use is this: Given a particular fruit, there is a limit on how much juice one can extract from it. Buying a more powerful extractor can help, but only up to a certain point. Other people describe this as signal versus noise.

With the above in mind, the first step in calculating a meaningful forecast accuracy target is to understand the level at which one would measure the forecast accuracy. That can be a whole discussion. For now, let us assume we are interested in forecast accuracy at the material-location level.

The first step is to run a segmentation analysis to understand which parts of the data should be forecasted in which way. This was covered very well by Ken in his blog post, “Time Series Forecasting Basics”.

The next step is to forecast using a simple method. We recommend the following three:

For relatively stable items, the forecast for this period equals the actual sales of the previous period. This method is also called the random walk (or naive) method.

For seasonal items, the forecast for this period equals the actual sales of the same period last year. This method is also called the seasonal random walk method. In a monthly setup, the forecast for month t is the actual from month t-12; in a weekly setup, it would similarly be the actual from week t-52.

For all others, one could simply use a 3-period average as the simple method.

All these methods are simple enough to be coded in Excel. Also, most if not all forecasting packages have these methods.
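To make the three methods concrete, here is a minimal Python sketch (pure Python, with illustrative data; in practice each material-location combination would get whichever method its segment calls for):

```python
def random_walk(history):
    """Naive forecast: next period equals the most recent actual."""
    return history[-1]

def seasonal_random_walk(history, season_length=12):
    """Seasonal naive: next period equals the actual from one season
    ago (t-12 in a monthly setup, t-52 in a weekly one)."""
    return history[-season_length]

def three_period_average(history):
    """Average of the three most recent actuals."""
    return sum(history[-3:]) / 3.0

# Thirteen months of history for one material-location combination.
history = [100, 102, 98, 105, 110, 95, 101, 99, 103, 107, 104, 100, 106]
print(random_walk(history))           # 106
print(seasonal_random_walk(history))  # 102 (same month last year)
print(three_period_average(history))  # about 103.3
```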

To be clear, at this point, your data set would be divided into three subsets where you would be applying one out of the three different simple methods mentioned above. So, if one has 100 combinations, 27 combinations might be forecasted using the random walk, 33 using the seasonal random walk, and the remaining 40 using the 3-period average.

The next step is to calculate the forecast accuracy achieved with these simple methods; say it is x%. You now have your lower bound: the minimum accuracy you should be able to achieve on your data set.

The next question is this: by deploying a sophisticated forecasting package and process, by how much can one increase the forecast accuracy? Past research has shown that improving on the simple methods is not an easy task. When done well, a 6-12% improvement in accuracy over the simple methods has been observed.

Now, one must apply this to their own situation. One could think along the following lines to pick a number from that range; a small worked example follows the list.

  • If forecast accuracy from the simple method is very low (<25%), the bump from a sophisticated forecasting package and process might be limited.
    • Likely reason: Data set is fundamentally unforecastable (too much noise).
  • If forecast accuracy from the simple method is very high (>85%), the bump from a sophisticated forecasting package and process might be limited.
    • Likely reason: The data set is so predictable that the simple methods are already able to extract almost all of the ‘signal’.
  • If the company in question is relatively immature in the forecasting process, they might take a long time to improve. This will limit their short-term upside in the forecasting accuracy.
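As a worked example, suppose the 6-12% figure is read as a relative improvement over the simple-method accuracy (if it is meant as percentage points instead, simply add 6 to 12 points to x):

```python
def target_band(simple_accuracy, low=0.06, high=0.12):
    """Target range from a simple-method baseline accuracy (in %),
    assuming a 6-12% relative improvement, capped at 100%."""
    return (min(100.0, simple_accuracy * (1 + low)),
            min(100.0, simple_accuracy * (1 + high)))

print(target_band(70.0))  # approximately (74.2, 78.4)
```

So a baseline of x = 70% from the simple methods would support a target band of roughly 74-78% under that reading.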

So, there you have it: our approach to setting meaningful forecast accuracy targets. If you have done this differently, please share. If you would like to discuss this further, please contact us.


Sujit Singh

As COO of Arkieva, Sujit manages the day-to-day operations at Arkieva such as software implementations and customer relationships. He is a recognized subject matter expert in forecasting, S&OP and inventory optimization. Sujit received a Bachelor of Technology degree in Civil Engineering from the Indian Institute of Technology, Kanpur and an M.S. in Transportation Engineering from the University of Massachusetts. Throughout the day don’t be surprised if you find him practicing his cricket technique before a meeting.

Source: https://blog.arkieva.com/setting-forecast-accuracy-targets/


Category Four: Accuracy

Ensure accuracy by carefully calculating targets to drive precise achievement. This usually occurs when there is a wealth of historical performance data and when leaders expect program managers to leverage descriptive and inferential statistics to set achievable targets that continuously improve performance.

Signs of accuracy:

  • Baseline data exists.
  • There is internal consensus about the validity of the measure.
  • There is internal consensus about the validity of the target.
  • The measure and/or target can be shared internally and with the public.
  • The target is set at least annually and revised at predetermined intervals.
  • The target is always set beyond past performance in order to drive improvement.
  • The target is within reasonable range based on the evidence of past performance.
  • The target is forecasted for the entire performance period.
  • The target is constructed using a combination of descriptive and inferential statistics.
  • The target accounts for contextual variables (e.g., current budget environment, industry trends, policy changes, etc.).
  • People routinely discuss methods to improve target accuracy.
  • There is evidence of a relationship between the target and actual performance.
  • Actual performance continues to improve.

How to practice accuracy?

  • Leverage multiple datasets (variables). An accurate target is one that accounts for as many variables as possible that impact the performance measure. For example, when forecasting the number of permits a building department can issue, think carefully about the influence of time, productivity, staffing levels, and permit type on achieving those targets. They likely all contribute to performance, so leveraging data on as many variables as possible will increase the accuracy of target predictions.
  • Use proven data science practices. Predicting the expected performance is the holy grail of setting great performance targets, but doing it well requires more advanced data science techniques. After gathering all of the relevant datasets and conducting data hygiene (i.e., cleaning it up), initiate an exploratory analysis of the data (e.g., identifying measures of central tendency, variability, and dispersion). From there, consider using random sampling to conduct data modeling and statistical inference on the data to build a predictive model.
  • Run multiple what-if scenarios and sensitivity analyses. With an underlying data model, variables can be changed to run multiple scenarios. For example, how sensitive is the target to changes in staff size, caseloads, or operating hours? This is a great time to engage senior leadership in defining the inputs to the data model, because it helps secure their buy-in for the final target if they understand the methodology behind its creation.
  • Pick an acceptable error range. No predictive model is perfect, so decide in advance how wrong it is acceptable to be. Error ranges can be statistically calculated and applied to the final target the same way political polls have “margins of error.” A minimal sketch of this idea follows this list.
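Here is a minimal sketch of the modeling idea above, using a plain linear trend as a stand-in for a fuller predictive model; the permit counts and the 1.96-sigma band are illustrative assumptions, not a production method:

```python
import numpy as np

# Hypothetical monthly permit counts for a building department.
actuals = np.array([310.0, 325, 330, 342, 351, 360, 355, 370, 381, 378, 390, 401])
t = np.arange(len(actuals))

# Fit a simple linear trend; a fuller model would also include
# staffing levels, permit type, seasonality, and so on.
slope, intercept = np.polyfit(t, actuals, 1)
residuals = actuals - (slope * t + intercept)

# Residual spread gives a rough, statistically grounded error range.
resid_sd = residuals.std(ddof=2)  # ddof=2: two fitted parameters

# Target for the next period, with an approximate 95% band.
point = slope * len(actuals) + intercept
print(f"target: {point:.0f}  "
      f"(95% band: {point - 1.96 * resid_sd:.0f} to {point + 1.96 * resid_sd:.0f})")
```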

How not to practice accuracy?

  • Do not get intimidated. Target setting can get advanced quickly, and the data science practices involved can intimidate even the best data enthusiasts. But the barriers to entry have never been lower for those trying to advance their data science and predictive modeling practices. The tools and training are cheaper than ever (many are free and online), and the community of practitioners is rapidly expanding.
  • Do not bend your target toward the actuals. As results come in, it is tempting to refine your targets up or down to match the actuals. This can prevent important learning about the accuracy and inaccuracies in the data model, and ultimately does nothing to inspire performance improvement.
  • Do not refine the targets too frequently. One way to prevent target “bending” is to set them and leave them unchanged, perhaps updating them quarterly (at most). Tinkering with the targets too frequently, in an attempt to be more accurate, opens the door for target manipulation.

Setting Targets vs. Forecasting

There is a subtle but important difference between setting targets and forecasting. Setting a target is defining where to aim. It implies aspiration based on capabilities. It is also intended to inspire improvement: always aiming higher and higher, or closer and closer to the bullseye. Forecasting is different. It is the science of predicting the exact mark most likely to be hit (e.g., where the hurricane will make landfall). As new data comes in about a program or service, revising the forecast is exactly the right thing to do; but resist the urge to revise the target. Instead, focus on understanding the variance among your targets, forecasts, and actuals so you can improve them moving forward.

Source: https://centerforgov.gitbooks.io/setting-performance-targets-getting-started-guide/content/CategoryFour.html

Meeting Redistricting Data Requirements: Accuracy Targets


Last year, the Census Bureau’s Disclosure Avoidance System (DAS) Team made a number of important improvements to the TopDown Algorithm (TDA) that will be used to protect the privacy of our respondents’ data in the P.L. 94-171 redistricting data product. As we have shown in our most recent set of demonstration data, those algorithmic improvements have substantially improved the accuracy of the resulting data, independent of the selection and allocation of the privacy-loss budget (PLB). As we explained in a recent newsletter, we have recently been turning our attention to optimizing and tuning the parameters of the algorithm to further improve accuracy.

The parameters of the TDA can be varied in a number of ways:  query strategy, allocation of PLB across geographic levels, allocation of PLB across queries, and optimization of geographic post-processing to improve accuracy of the data for “off-spine” geographic entities. Determining the optimal settings for these parameters requires empirically evaluating large numbers of TDA runs against objective accuracy metrics. 

Working with the Redistricting Community to Meet Data Requirements

For the P.L. 94-171 redistricting data product, the principal statutory use cases are the redistricting process and the U.S. Department of Justice’s enforcement of the Voting Rights Act of 1965 (VRA). To facilitate this analysis, the Department of Justice supplied sample redistricting and VRA use cases for the Census Bureau to evaluate against. 

Based on these use cases and additional feedback, we created an accuracy target to ensure that the share of the largest racial or ethnic group in any geographic entity with a total population of at least 500 people is accurate to within 5 percentage points of its enumerated value at least 95% of the time.
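Stated as a check, the target looks roughly like the following sketch (all names and arrays are hypothetical; this is one way to operationalize the published criterion, not the Bureau's actual evaluation code):

```python
import numpy as np

def meets_target(enum_group, enum_total, prot_group, prot_total,
                 min_pop=500, tol_pp=5.0, required_rate=0.95):
    """One entry per geographic entity, for its largest racial or
    ethnic group: enumerated and privacy-protected counts. Returns
    True if, among entities with at least `min_pop` enumerated
    residents, the protected share is within `tol_pp` percentage
    points of the enumerated share at least `required_rate` of
    the time."""
    keep = enum_total >= min_pop
    enum_share = 100.0 * enum_group[keep] / enum_total[keep]
    prot_share = 100.0 * prot_group[keep] / prot_total[keep]
    within = np.abs(prot_share - enum_share) <= tol_pp
    return bool(within.mean() >= required_rate)
```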

Because the redistricting and VRA use cases rely on geographic aggregations that cannot be prespecified (e.g., congressional districts that will be drawn after the data are published), for evaluation purposes the DAS Team used three already specified geographic constructs that resemble the size and composition of voting districts that will eventually be drawn: block groups (which are on the TDA geographic spine), census designated places (which are “off-spine”), and a customized set of off-spine entities that distinguished between strong minor civil division states and other states. The customized off-spine entities are similar to census designated places.

Because these accuracy targets are expressed in relative shares of the total population, tuning the TDA for accuracy of the racial/ethnic group’s share also tunes the algorithm for corresponding accuracy of the total population of those geographies. 

The Census Bureau is still evaluating the empirical results of these experimental runs, but we look forward to sharing these results and how they will inform the parameter settings used for our next set of demonstration data in our next newsletter.

2021 Key Dates, Redistricting (P.L. 94-171) Data Product:

By April 30:

  • Census Bureau releases new Privacy-Protected Microdata Files (PPMFs) and Detailed Summary Metrics.
      • Two versions: Candidate strategy run using new PLB and old PLB.

By late May:         

  • Data users submit feedback.

Early June:

  • The Census Bureau’s Data Stewardship Executive Policy (DSEP) Committee makes the final determination of the PLB and system parameters for P.L. 94-171, based on data user feedback.

Late June:      

  • Final DAS production run and quality control analysis begins for P.L. 94-171 data.

By late August:

  • Release 2020 Census P.L. 94-171 data as Legacy Format Summary File*.

September:       

  • Census Bureau releases PPMFs and Detailed Summary Metrics from applying the production version of the DAS to the 2010 Census data.
  • Census Bureau releases production code base for P.L. 94-171 redistricting summary data file and related technical papers.

By September 30:         

  • Release 2020 Census P.L. 94-171 data** and Differential Privacy Handbook.

*   Released via Census Bureau FTP site.

** Released via data.census.gov.


Source: https://content.govdelivery.com/accounts/USCENSUS/bulletins/2cb745b

How to Set Meaningful Forecast Accuracy Targets


Source: https://www.gartner.com/en/documents/4002584-how-to-set-meaningful-forecast-accuracy-targets
