Cycles - Decoding the Hidden Rhythm

Introduction

It's all about cycles. Cycles that influence our life here on Earth. Cycles that represent energy flows that influence people's moods and emotions. Cycles that represent energies originating from outer space. Cycles that manifest themselves in the measurable value of the economy. This publication introduces an approach to identifying relevant cycles and to using this information for forecasting. The approach is not entirely new. But the way we use the "cyclic approach" is.

Cycles are important. Cycles surround us and influence our daily lives. Many events are cyclical in motion. There is the ebb and the flow of waves and the inhaling and exhaling of humans. Our daily work schedule is determined by the day and night cycles that come with the rotation of the Earth around its own axis. The orbit of the Moon around the Earth causes the tides of the oceans. Cycles also have an impact on women from their teenage years on - the menstruation cycle. Gardeners have long understood the advantages of working with cycles to ensure successful germination of seeds and a high-quality harvest. They work in harmony with the cycles to attain the best results, the best crops.

We experience four seasons every year, namely the changes in climate resulting from the rotation of the Earth around the Sun. This seasonal cycle creates the changes in conditions that affect all living beings on Earth. One common cycle based on seasonal conditions is bird migration, a regular seasonal journey undertaken by many species of birds. You would wait until early spring to plant new seeds to take advantage of the rising energy of the warming spring temperatures. Knowing when the sun will rise may not seem like a prediction, because we associate prediction with uncertainty and risk, but it is, nonetheless, a prediction of future events that is highly accurate.

These cycles are largely based on the cyclical movements of the Sun and Moon. However, there is strong evidence that other additional energy cycles in the universe influence our life here on Earth. Independent research by the University of California and the University of Kansas has revealed that the rise and fall of species on Earth seems to be driven by the motions of our solar system as it travels through the Milky Way. Some scientists believe that this cosmic force may provide the answer to some of the biggest questions about Earth's biological history (Schwarzschild, 2007).

Finally, cycles have a long history in explaining our behavior on Earth. We can go back a long time in history to recognize that life follows a path of time cycles. We can even find reference to this in the Bible:

A Time for Everything

There is a time for everything, and a season for every activity under heaven: a time to be born and a time to die, a time to plant and a time to uproot, a time to kill and a time to heal, a time to tear down and a time to build, a time to weep and a time to laugh, a time to mourn and a time to dance, a time to scatter stones and a time to gather them, a time to embrace and a time to refrain, a time to search and a time to give up, a time to keep and a time to throw away, a time to tear and a time to mend, a time to be silent and a time to speak, a time to love and a time to hate, a time for war and a time for peace.
Ecclesiastes 3:1-8

If there is indeed a time for everything that explains and predicts our behavior, this must also be applicable to people's economic hopes, which manifest themselves in the value of the stock market. Two well-known pioneers who applied cyclic analysis to the stock market are W.D. Gann and J.M. Hurst. Gann used cyclic and geometric time and price patterns, but did not elaborate the details of his approach. His work is still a mystery to many of us. Hurst was the first to introduce cycle analysis to the technical analysis of the stock market. Even today, many cycle forecasters, like Peter Eliades, successfully use the techniques of Hurst's approach outlined in his seminal work "The Profit Magic of Stock Transaction Timing". For example, Hurst demonstrated that the only difference between a head-and-shoulders pattern and a double-top pattern is the phasing of the cyclic components. Additionally, a paper published by three authors from the MIT Laboratory for Financial Engineering in 2000 concludes that "technical patterns do provide information. It does raise the possibility that [pattern] analysis can add value to the investment process." (Lo, Mamaysky, Wang, 2000)

Today we have evidence that detecting patterns adds value to the investment process and that all technical patterns can be rebuilt by means of cyclic components. In this regard, it should be valuable to think in terms of cycles rather than using a framework that consists of static chart patterns. If this is the case and has already been widely acknowledged, why are only a few analysts and investors using cyclic analysis? The likely answer is that cyclic analysis is extremely difficult to put into practice. It requires a great deal of work and some complex mathematics that is not easy for everyone to apply. Additionally, many obstacles exist that hamper the use of cycle analysis:

The gap in speech/language between cycle researchers and traders

One reason cycle analysis is often limited to scientific researchers is the linguistic barrier. This becomes clear in the following example:

"The actual support level identified, coupled with Fibonacci retracement, suggests the presence of strong buying opportunities in the near-term."

"The magnitude of the first six frequency patterns and the statistical significance of the Q-score suggest the presence of a high-frequency predictable component in the stock market."

Even though both statements have the same meaning, most readers will understand the first statement but find the second puzzling.

The gap of trading expertise vs. cycle calculation

The second gap is attributable to different knowledge areas. Technical analysis is primarily visual, while cycle analysis is mostly numerical. The visual mode of technical analysis is one of the few human cognitive activities where computers do not yet have an absolute advantage over us. Numerical analysis involves the study of data sets after the fact. But in real-time environments, traders and investors must decide in the now, and their decisions are mainly based on visual pattern recognition from charts. In many cases, the human eye can perform this "signal extraction" quickly and accurately. There are no, or more precisely, only a few available cycle tools that can present the visual information extracted from numerical cycle analysis to the trader and function as a visual guide.

The gap of forecasting vs. trading

The third reason cyclic analysis is something of a rarity in trading is the distinction between forecasting and trading. Most traders are not interested in predicting the future; instead, they enter a trade based on probabilities, apply money management, and exit the trade sticking to clear rules. They claim that this is "the real way of trading". Traders are convinced that they can make money by simply entering a trade randomly and applying money management and exit rules. On the other end of the spectrum are the "forecasters". This group of experts is not interested in money management and exit strategies. They base their trading solely on predicting future market behavior. A gap exists between the mindsets of these two groups, characterized by an ongoing debate about trading versus forecasting. Cyclic analysis is more of a forecasting method. It is therefore not surprising that this tool cannot be found in an active trader's toolbox. The active trader is not interested in "forecasting". He manages his trade.

Bridging the Gap

This publication tries to bridge these gaps. It differs from traditional books on cycle approaches because it does not deliver a static framework of cycles that data need to be squeezed into. That is, we do not try to make the market "fit" into a particular cycle framework, which always has at least two different possible outcomes. "Failures" within static frameworks are often explained with a complex set of named exceptions and deviations. The listing of exceptions after a static cycle framework fails (such as: "A cycle inversion took place") is of little comfort to the investor who has made investments based on one of the delivered predictions. All cycle tools are explained in detail and can easily be dragged and dropped onto the chart via the cycle.tools cloud application, or can be integrated into your own applications via our public API. Most of the source code for the introduced algorithms is also shared as open source for your own follow-up projects.

How to detect and measure cycles

The pivotal point of the approach described here is a method that can accurately determine which cycle is currently active with regard to the length, amplitude, and timing of the last high and low of a data series. To borrow from the language of engineering, frequency analysis is used to measure cycles. As simple users, however, we should not be deterred by these "technical" terms. Frequency is nothing other than "oscillations (cycles) per time frame"; measuring frequency is therefore a recurring subject in technical-mathematical analysis. Time-frequency analysis identifies the point in time at which various signal frequencies are present, usually by calculating a spectrum at regular time intervals. The application of frequency analysis to financial data is in principle nothing new and has already been described in numerous articles. However, current methods often come up against barriers in terms of application in financial markets. This is attributable to the specific features of the financial markets: they are influenced by numerous overlapping waves, whose strengths and phases vary over time and are consequently not constant. The data are also overlaid by significant one-off events (noise) and quasi-linear trends. The classical methods of frequency analysis are not designed for these special characteristics of financial markets.
Hence, the established methods are largely unable to provide reliable results as far as practical trading signals are concerned. However, this section is designed for practical application in trading and forecasting and is not intended to be a scientific publication on new algorithms. Against this background, I would like, on the one hand, to abstain from the academic debate about the advantages and disadvantages of individual methods and, on the other, to avoid repeating what has already been said in other publications.

By combining special DFT methods (including the Goertzel algorithm), validation by means of statistical measurement methods (including the Bartels test) and pre-processing approaches (detrending), this framework provides a reliable method for measuring cycles in financial time series datasets. The proposed method provides the spectrum of frequency analysis for every asset, dataset and every possible time frame. The following results are thereby provided:

Presenting a visual spectrum of the wave analysis for lengths of 5 - 400 bars;

Determining the peaks in the spectrum analysis - i.e., the relevant and significant cycles;

Filtering the values derived from the frequency analysis through statistical validation, i.e., identifying the cycles that are actually "active";

Determining the precise phase and amplitude of every active cycle;

Output of the data in a form comprehensible to traders, i.e., the phase in the form of the date of the last low point, the amplitude in the form of the current price scale, and the length of the wave in the form of the number of bars on the chart;

Determining the "strength" of a cycle by establishing the price movement per bar ("cycle strength").

In classical cycle analysis, the waves with the largest amplitude are usually described as dominant. However, the relative influence of a cycle per time unit - i.e., per bar on the chart - is of much greater interest. Therefore, the so-called cycle strength is introduced here and used as a measurement value for the cycle with the greatest influence per price bar. The cycle with the highest cycle strength will be used later as the "Dominant Cycle" (a short code sketch of this ranking follows below).

These results and the mathematical method alone would fill an entire book on their own. As this publication is designed for practical purposes and aims to advance the method's successful application in cycle analysis, this book is structured in the following main chapters:

Cycles Explained - To introduce basic parameters and knowledge

Applications and Examples - To illustrate the analytical algorithm

Scanner Framework - To explain how the algorithm is designed

Real World Examples - To see how it works in live situations

Cycles Explained

Introduction to Cycles

Cycle Analysis Explained

Why are cycles so important? Our daily work schedule is determined by the day and night cycles that come with the rotation of the Earth around its own axis. The orbit of the Moon around the Earth causes the tides of the oceans. Gardeners have long understood the advantages of working with cycles to ensure successful germination of seeds and a high-quality harvest. They work in harmony with the cycles to attain the best results, the best crops. These are just a few cycles with recurring, dominant conditions that affect all living beings on Earth. So if we are able to recognize the current dominant cycle, we are able to project and predict behavior into the future.
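Before moving to the examples, here is a minimal sketch of the cycle-strength ranking introduced above. The type and member names are illustrative only (not the cycle.tools API); the point is simply that the dominant cycle is chosen by influence per bar, not by raw amplitude:

using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative container for one detected cycle.
public record DetectedCycle(double LengthBars, double Amplitude)
{
    // Cycle strength as defined above: relative influence per bar.
    public double Strength => Amplitude / LengthBars;
}

public static class CycleRanking
{
    // The "Dominant Cycle" is the one with the greatest influence per bar,
    // which is not necessarily the one with the largest amplitude.
    public static DetectedCycle DominantCycle(IEnumerable<DetectedCycle> cycles) =>
        cycles.OrderByDescending(c => c.Strength).First();
}

For example, a cycle of length 110 bars with amplitude 300 has a strength of about 2.7 per bar, while a cycle of length 60 bars with amplitude 200 has a strength of about 3.3 per bar; the second would be ranked as dominant even though its amplitude is smaller.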
Let us start with a simple example to illustrate the power of today's digital signal processing.

Weather

Figure 1 shows the daily outdoor temperature in Hamburg, Germany (blue). This raw data was fed into a digital signal processor to derive the current underlying dominant cycle and project this cycle into the future (purple).

Figure 1: Local temperature Hamburg Germany, Dominant Cycle Length: 359 days, Source: https://cycle.tools (Jan. 2020)

The cycle detection analyzed the dataset and provides us with useful information about the underlying active cycles in this dataset. For the daytime temperature this may be obvious to the eye anyway, but it serves as an introductory example. There are three important pieces of information that have to be identified by any cycle detection engine:

Which cycles are active in this data set?

How long are the active dominant cycles?

Where are the highs/lows of these cycles aligned on the time scale?

Any cycle detection algorithm must output this information from the analyzed raw data set. In our case, the information is displayed on the right side of the graph as a "cycle list".

How to sort the active cycles?

The table view at the right part of Figure 1 provides an answer, namely which cycles are currently active, with the most active cycle being plotted at the top of the list. We can identify the most dominant cycle by its amplitude relative to the other detected cycles. In this case there is only one dominant cycle with an amplitude of 13, followed by the next relevant cycle with an amplitude of about 3. The second cycle's relative size is too small compared to the first to play an important role. We could therefore skip each of the "smaller" cycles in terms of their amplitude compared to the highest-ranking cycle with an amplitude of 13.

What is the length of the active cycle?

To answer the second question, the "length" information is displayed. For the highest-ranking cycle we see a length of 359 days. This is nothing other than the annual seasonal cycle for a location in Central Europe. Note that the recognition algorithm does not actually know that this is "weather" data; it is simply capable of extracting the length from the raw data for us.

Where are we in that cycle?

Finally, the cycle status is the third important piece of information we need: When have the ups and downs of the cycle with a length of 359 days occurred in the past? In technical terminology, this is called the current phase of the dominant cycle. It is represented by the plotted overlay cycle, whose highs and lows are shown in the output as the purple line: The low occurs in January/February, while the high takes place in July.

Using these three pieces of information about the dominant cycle, we can start with a prediction: We would expect the next low in early February and the next high in July. The dominant cycle is extended into the future. Well, this is an overly simple example, but it shows that identifying and forecasting cycles will provide useful information for future planning.

So that's the trump card: Any cycle detection algorithm must provide information about

What are the currently active dominant cycles?

How long is the active cycle?

When are past and future highs/lows?

Sentiment data

Let us continue and apply this algorithm to a financial data set. Similar to the weather cycle, sentiment cycles are often the driving force behind ups and downs in the major markets.
Understanding the sentiment cycles in financial stress is critical to generating returns in the current market environment. Sentiment cycles influence the movement of financial markets and are directly related to people's moods. Getting a handle on sentiment cycles in the market would substantially improve one's trading ability.

Figure 2 shows the same technique applied to the VIX index, also called the "fear index". The blue plot shows the raw daily VIX data at the time of writing. The detected dominant cycle is shown as an overlay with its length and its phase/time alignment, making it possible to draw the mood cycle into the future.

Figure 2: VIX Cycle, Dominant Cycle Length: 180 bars, Source: https://cycle.tools (13. Feb. 2020)

The reading of the VIX sentiment cycles is somewhat different when applied to stock market behavior: Data lows show windows of high confidence in the market and low fear among market participants, which in most cases correspond to market highs of stocks and indices. On the other hand, data highs represent a state of high anxiety, which occurs in extreme forms at market lows in particular. Reading the cycle in this way, one would predict a market high in the current period at the end of December 2019/beginning of 2020 and an expected market low that, according to the VIX cycles, could occur somewhere around April.

Similar to the identification and forecasting of weather/temperature cycles, we can now identify and predict sentiment cycles. In terms of trading, one should never follow a purely static cycle forecast. The cycle-in-cycles approach should be used to cross-validate different related markets for the underlying active dominant cycles. If these related markets show cyclical synchronicity, the probability of successful trading strategies increases.

Global stock markets

Figure 3 now shows the same method applied to the S&P 500 stock market index. The underlying detected cycle has a length of 173 bars and indicates a cycle high at the current time. This predicts a downward trend of the dominant cycle until mid-2020.

Figure 3: SP500, Dominant Cycle Length: 173 bars, Source: https://cycle.tools API / NT8 (13. Feb. 2020)

We have now discovered two linked cycles: a sentiment cycle with a length of about 180 bars, which indicates a low stress level in December 2019 with a subsequently rising anxiety level, and a dominant S&P 500 cycle, which indicates a market with an expected downtrend until summer 2020. Both cycles are synchronous and parallel in length, timing and direction. This is the key information of a cycle analysis: Synchronous cycles in different data sets that could indicate a trend reversal for the market under investigation.

Well, we can go one step further now. Since more than one dominant cycle is usually active, you should also look at the top 2-3 dominant cycles in a composite cycle diagram.

A composite cycle forecast

Figure 4 shows this idea when analyzing the active cycles for the Amazon stock price. See the list on the right for the current active cycles identified. The most interesting ones have been marked, with lengths of 169 bars and 70 bars. It is possible to select the most important ones based on the cycle strength information and the Bartels score. Without going into the details of these two mathematical parameters yet, let us simply select the two highest-ranking ones for the example.
Now, instead of simply drawing the dominant cycle into the future, we use both selected dominant cycles for the overlay composite cycle display (purple), which is also extended into the unknown future.

Figure 4: Amazon Stock, Dominant Cycle Length: 169 & 70 bars, Source: https://cycle.tools (04. Feb. 2020)

The purple line shows the cycles with the lengths of 169 and 70 as well as their detected phase and time alignment in a composite representation. A composite plot is a summary of the detected cycles, adding their phase and amplitude at a given time. One can see how well these two cycles in particular can explain the most important stock price movements at Amazon in the last two years. It is interesting to note in this case that a similar composite cycle in the Amazon stock price, as previously shown by the sentiment and S&P 500 index cycles, indicates a cyclical downtrend from January to summer 2020.

By combining different data sets and the analysis of the dominant cycle, we can detect a cyclical synchronicity between different markets and their dominant cycles. The mathematical parameters of cycles allow us to project a kind of "window into the future", with a projection of the next expected main turning points of the cycle or composite cycle. This information is valuable when it comes to trading and trading techniques, especially when you are able to identify similar dominant cycles and composite cycles in related markets which are "in sync".

The examples used have been kept simple and fairly static to show the basic use of cycle detection and prediction. The projections obtained must be updated with each new data point. It is therefore essential to not only perform this analysis once, statically, but to update it with each new data point. Knowing how to use cyclical analysis should be part of any serious trading approach and can increase the probability of successful strategies. Because if a rhythmic oscillation is fairly regular and lasts for a sufficiently long time, it cannot be the result of chance - and the more regular it is, the more predictable it becomes. There is often a lack of simple, user-friendly applications to put this theory into practice. We have to work on spreading this knowledge and its application rather than on the scientific-mathematical deepening of algorithms. This theory can be applied to any change on our Earth as well as to any change of human beings in order to understand their nature and predictable behavior.

Background Information

How does the approach used in these examples work? The technique applied is based on a digital processing algorithm that does all the hard work and math to derive the dominant cycle in a way that is useful for the non-technical user. More information on the cycle scanner framework used for these examples can be found in this chapter.

Cycle Parameters Explained

The following chart summarizes all relevant parameters related to a "perfect" sine wave cycle:

What is Frequency?

Frequency is the number of times a specified event occurs within a specified time interval.

Example: 5 cycles in 1 second = 5 Hz; 1 cycle in 16 days = 0.0625 cycles/day = 723 nHz

What is Strength?

Strength is the relative amplitude of a given cycle per time interval ("amplitude per bar").

Example: A = 213, d = 16, s = 213 / 16 = 13.3 per d

Read more on Cycle Strength in how to "Rank" cycles here.

What is the Bartels Score?

The Bartels score provides a direct measure of the likelihood that a given cycle is genuine and not random.
It measures the stability of the amplitude and phase of each cycle.

Formula: Bartels score % = (1 - Bartels value) * 100

Range: 0%: cycle influenced by random events, not significant; 100%: cycle is significant / genuine

Read more on how to validate cycles with the Bartels score here.

Dynamic Nature of Cycles

Cycles are not static. Dominant Cycles morph over time because of the dynamic nature of their inner parameters of length and phase. Active Dominant Cycles do not abruptly jump from one length (e.g., 50) to another (e.g., 120). Typically, one dominant cycle will remain active for a longer period and vary around its core parameters. The "genes" of the cycle in terms of length, phase, and amplitude are not fixed and will morph around the dominant mean parameters. The assumption that cycles are static over time is misleading for forecasting and cycle prediction purposes.

These periodic motions abound both in nature and the man-made world. Examples include a heartbeat or the cyclic movements of planets. Although many real motions are intrinsically repeated, few are perfectly periodic. For example, a walker's stride frequency may vary, and a heart may beat slower or faster. Once an individual is in a dominant state (such as sitting to write a book), the heartbeat cycle will stabilize at an approximate rate of 85 bpm. However, the exact cycle will not stay static at 85 bpm but will vary by +/- 10%. The variance is not considered a new heartbeat cycle at 87 bpm or 83 bpm, but is considered the same dominant, active vibration. This pattern can be observed in nature as well as in mathematical models. Real cyclic motions are not perfectly even; the period varies slightly from one cycle to the next because of changing physical environmental factors. Steve Puetz, a well-known cycle researcher, calls this "period variability":

"Period variability - Many natural cycles exhibit considerable variation between repetitions. For instance, the sunspot cycle has an average period of ~10.75-yr. However, over the past 300 years, individual cycles varied from 9-yr to 14-yr. Many other natural cycles exhibit similar variation around mean periods." Puetz (2014): in Chaos, Solitons & Fractals

This dynamic behavior is also valid for most data series which are based on real-world cycles. However, anticipating current values for length and cycle offset in real time is crucial to identifying the next turn. It requires an awareness of the active dominant cycle parameters and the ability to verify and track the real current status and dynamic variations that facilitate projection of the next significant event. Figures 1 to 3 provide a step-by-step illustration of these effects. The illustrations show a grey static cycle. The dynamic variation in the cycle is represented by the red one, with parameters that morph slightly over time. The marked points A to D represent the deviation between the ideal static and the dynamic cycle.

Effect A: Shifts in Cycle Length

The first effect is the contraction and expansion of cycles, or the "cycle breath." Possible cycles are detected from the available data on the left side of the chart. Points A and B show an acceptable fit between both cycles. However, the red dynamic cycle has a slightly greater length parameter. The past data reveal that this is not significant, and there is a good fit between the theoretical static and the dynamic cycle at points A and B.
Unfortunately, the future projection area on the right side of the chart, where trading takes place, reflects an increasing deviation between the static and dynamic cycle. The difference between the static and dynamic cycle at points C and D is now relatively high. The real "dynamic" cycle has a slightly greater length parameter. The consequence is that future deviations increase even when the deviations between the theoretical and real cycle are not visible in the area of analysis. These differences are crucial for trading. As trading occurs on the right side of the chart, the core parameters now and for the next expected cycle turn must be detected. A perfect fit with past data or a two-year projection is not a concern. The priority is the here and now, not a mathematical fit with the past. Current market turns must be in sync with the dynamic cycle to detect the next turn. Therefore, just as an individual heartbeat cycle approximates a core number, the cycle length will vary around the dominant parameter by +/- 5%. Following only the theoretical static cycle will not provide information concerning the next anticipated turning points. However, this is not the only effect.

Animated Video - Length Shifts:

Effect B: Shifts in Cycle Phase

The next effect is "offset shifts." In this case, the cycle length parameter is the same for the static theoretical and the dynamic cycle. The dynamic cycle at point A presents a slight offset shift at the top. In mathematical terms, the phase parameter has morphed. This effect remains fixed into the future: a static deviation is observed between the highs and the lows. However, this is not a one-time effect; the phase of the dominant cycle will also change continuously by +/- 5% around the core dominant parameters.

Animated Video - Phase Shifts:

The Combined Effects

In practice, both effects occur in parallel and change continuously around the core dominant parameters. Figure 3 presents a snapshot of both effects with the theoretical and the dynamic cycle. The deviation in the projection area at points C and D shows that simply following the static theoretical cycle will rapidly become worthless. The deviation is such that, at point D, a cycle high is expected for the theoretical static cycle (grey) while the real dynamic cycle (red) is at a low. These two effects occur in a continuous manner. Although the alignment in the past (points A and B) appears acceptable between the static and dynamic cycle, the deviation in the projection area (points C and D) is so high that trading the static cycle will lead to failure.

Animated Video - Combined Effects:

A cycle forecasting example incorporating these effects explains the consequences on the right side of the chart. Consider the following two examples, named "A" and "B". The price chart is the same for both examples and is represented by a black line on the chart. In both examples, a dominant cycle is detected (red cycle plot) and plotted together with the price. The tops and lows show alignment with the price data: two cycle tops and two cycle lows align. This implies that the same dominant cycle is active in both charts; there is one core dominant cycle, and the two detected cycles are variations of this same dominant cycle. Therefore, from an analytical perspective, both cycles could be considered valid from observations of the available dataset.
The effects reveal that even when the deviation from past data looks convincingly small, it can significantly impact the projection area. We examine the projection of both cycles and observe two contrasting projections. Example A shows a bottoming cycle with a projected upturn to a future top. Example B shows the opposite: a topping-out cycle with an expected future downturn. While we can detect a dominant cycle in the left area of the chart, the detailed dynamic parameters are the significant differentiators and are crucial to a valid and credible projection. Classic static cycle projections often fail for this reason. Detecting the active dominant cycle represents one part of the process. The second part is to consider the current dynamic parameters with respect to the length and phase of that cycle. Although the perfect fit of a static cycle to price in the distant past might appear convincing from a mathematical perspective, it is misleading because it ignores the dynamic cycle components. Doing so simplifies the math, but is of no value for trading on the right side of the chart. The examination of perfect-fit static cycles in the past is not necessary. The observation of two to five significant correlations of tops and lows, AND the consideration of current dynamic component updates, will yield valid trading cycle projections. This example underpins the significance of an approach that combines a dominant cycle detection engine with a dynamic component update.

Video Lesson - Dynamic Cycles Explained

The following video illustrates the two effects in action (6 min.)

Cycle Playbook

Please click on the chart below to work with the example on Desmos. Direct link: https://www.desmos.com/calculator/qoktq3jboh

Asymmetric Business Cycles and Skew Factors

Preface: Cycle analysis and cycle forecasting often imply the use of a symmetric time distribution between high to low and low to high. This is the underlying framework used by anyone applying mathematical signal processing to cycles and producing cycle-based composite cycle forecasts. This technique is now faced with a new challenge that has emerged over the past 30 years based on financial regulations impacting today's economic business cycle. The following article will highlight the situation and present the reader with a proposed skew factor to account for this behavior in cycle forecasting models.

Business cycles are a fundamental concept in macroeconomics. The economy has been characterized by an increasingly negative cyclical asymmetry over the last three decades. Studies show that recessions have become relatively more severe, while recoveries have become smoother, as recently highlighted by Fatas and Mihov. Furthermore, recessionary episodes have become less frequent, suggesting longer expansions. As a result, booms are increasingly smoother and longer-lasting than recessions. These characteristics have led to an increasingly negative distortion of the business cycle in recent decades. Extensive literature has examined in detail the statistical properties of this empirical regularity and confirmed that contractions tend to be sharper and faster than expansions. In a paper published in the American Economic Journal in January 2020, Jensen et al.
summarized: "Booms become progressively smoother and more prolonged than busts. Finally, in line with recent empirical evidence, financially driven expansions lead to deeper contractions, as compared with equally sized nonfinancial expansions."

When recessions become faster and more severe and recoveries softer and longer, standard symmetric cycle models are doomed to fail. This new pattern challenges the existing standard, symmetrical, 2-phase cycle models, since these are based on a time-symmetric distribution of dominant cycles with mathematical sine-based counting from low to low or high to high. These models lose their forecasting ability once a uniform time distribution from high to low and low to high is no longer given. A new model is needed: a dynamic skew cycle model that includes a skew factor.

Before introducing a new mathematical model to account for the asymmetric behavior, the cycle difference will be visualized and compared in some diagrams. The following illustration shows a classical, symmetrical 2-phase cycle on the left (green) and an asymmetric 3-phase cycle on the right (red).

Asymmetric Cycle Model

The following model, shown in Chart 1, uses a simplified formula that allows different distortions of the phases via a skew factor, but also keeps the length of the whole cycle, from peak to peak, the same without distortion.

Chart 1: Comparing 2-phase symmetric (green) and 3-phase asymmetric cycle models (red)

The new "skew factor" used in the red model shows that the upswing phase is twice as long as the recession, while ensuring the same total duration and amplitude as the standard 2-phase cycle model (green, left). This allows us to model identified cycle lengths and strengths in the 3-phase model (red, right). So, if we add the "skew factor" to the traditional mathematical cycle algorithms, we get cycle models that account for the asymmetric changes mentioned above - and thus the cycle models can be used for forecasts again.

Example: The skew factor on the S&P 500 index

Chart 2 shows a detected dominant, symmetric cycle with a length of 175 bars in January 2020 for the S&P 500 index. The light blue price data were not known to the cycle detection algorithm and represent the out-of-sample forecast range. The cycle is shown as a pink overlay. This symmetrical cycle forecast predicts the peak to occur as early as the end of 2019, and a new low for this cycle to occur in May.

Chart 2: S&P 500 with 175 day symmetric cycle, skew factor: 0.0, date of analysis: 16. Jan 2020

As can be seen, the predicted high came earlier than the real market top, and the predicted low came later than the actual market low. This is a common observation when using symmetric cycle models in today's markets. The analyst can, of course, anticipate from knowledge of asymmetric variation that the predicted high will be too early and the plotted low too late. However, this requires additional knowledge on the part of the analyst that is not represented in the model. A better approach would be to include this knowledge in the modeling of the cycle projection itself. Therefore, we now add the skew factor to the detected cycle analysis approach. In the next graph (Chart 3), a skew factor is applied to the same 175-day cycle. The date of analysis is still January 16, and the light blue area is the out-of-sample prediction period. Here the asymmetric cycle forecast projects the peak for late January and the low for March 2020.
The real price followed this asymmetric cycle projection more accurately.

Chart 3: S&P 500 with 175 day asymmetric cycle, skew factor: 0.4, date of analysis: 16. Jan 2020

This example demonstrates the importance of adapting traditional cycle prediction models by adding a skew factor. The introduction of a skew factor is based on the current scientific knowledge of the changed, asymmetric business cycle behavior. The next paragraphs explain how this asymmetry can be applied to existing mathematical cycle models by introducing the skew factor formula.

Cycle Skew

The skew factor allows the representation of an asymmetric shape for business cycles in a cyclic model, as shown in the following examples. The green cycle is a standard sine-wave cycle (skew = 0.0); the red cycle applies a specific skew factor.

Examples: skew = 0.5, skew = 0.75, skew = -0.5, skew = -0.75

The generalized form plotted in the Desmos playbook is:

((a + cos x) cos n + b sin x sin n) / sqrt((a + cos x)^2 + (b sin x)^2)

Desmos interactive playbook: https://www.desmos.com/calculator/ejq06faf93

Math & Code

Equation

To apply the cycle skew, the skewed sine wave equation is introduced instead of a pure sine wave, sin(x):

SineSkewed(x, skew) = sin(x) / sqrt((skew + cos(x))^2 + (sin(x))^2)

Where: skew = skew factor [range: -1 ... +1]; x = phase (rad)

Math LaTeX code

SineSkewed({\color{DarkGreen} x}, {\color{Blue} s_{kew}})=\frac{\sin {\color{DarkGreen} x}}{\sqrt{({\color{Blue} s_{kew}}+\cos {\color{DarkGreen} x})^2+(\sin {\color{DarkGreen} x})^2}}

.NET C# code

Skewed sine wave function:

double MathSineSkewed(double x, double skew)
{
    // skew in [-1, +1]; skew = 0 returns a pure sine wave
    double skewedCycle = Math.Sin(x) /
        Math.Sqrt(Math.Pow(skew + Math.Cos(x), 2) + Math.Pow(Math.Sin(x), 2));
    return skewedCycle;
}

How to use in a cycle forecast algorithm

A common approach is to build cycle prediction models based on detected or predefined values for cycle length, phase, and amplitude. These models use a standard sine wave function to create a cycle forecast or composite forecast projection. To retain these models and not recreate an existing model, the proposal presented in this paper is to simply replace the existing standard sine wave function with the new skewed sine wave function. Thus, any cycle prediction algorithm can remain as is and use the detected cycles with length, amplitude, and phase as input parameters. At the same time, the projection function is replaced with the new skewed sine function instead of the standard sine function.

The main features of this function in brief:

It is designed as a drop-in replacement for existing sine or cosine functions used for cycle prediction. It is not necessary to adjust the existing overall model; simply use this function as a drop-in replacement in an existing algorithm.

A skew factor of 0.3-0.4 should be used to fit the model according to current scientific evidence on the asymmetry of the business cycle.

The cycle length will not be distorted; the length is preserved. Thus, the top-to-bottom and bottom-to-top cycle counts are preserved and are not distorted.

The amplitude will not be distorted either.

In this way, it is a safe replacement, with the main cycle parameters of length and amplitude remaining intact.

Summary

The current scientific literature shows the increasingly asymmetric behavior of economic cycles. Explanations can be found in the changing behavior of the financial systems in the US and G7 countries. Against this background, previous 2-phase symmetric cycle models need to be adjusted. The demonstrated approach of introducing an additional skew factor into existing sinusoidal models can help to better adapt cycle-based forecasts to this situation.
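To make the drop-in idea concrete, here is a minimal sketch of a composite projection that uses the skewed sine instead of Math.Sin. The tuple layout for detected cycles, the class name, and the default skew of 0.35 (taken from the 0.3-0.4 range suggested above) are assumptions for illustration, not a prescribed implementation:

using System;
using System.Collections.Generic;

public static class SkewedForecast
{
    // Composite projection: sum of the detected cycles at a given bar index,
    // using the skewed sine wave instead of a pure sine wave.
    public static double CompositeForecast(
        double bar,
        IEnumerable<(double Length, double Amplitude, double Phase)> cycles,
        double skew = 0.35)
    {
        double sum = 0.0;
        foreach (var c in cycles)
        {
            double x = 2.0 * Math.PI * bar / c.Length + c.Phase; // bar index -> phase angle
            sum += c.Amplitude * MathSineSkewed(x, skew);        // drop-in for Math.Sin(x)
        }
        return sum;
    }

    // Skewed sine wave function from above, repeated here for completeness.
    static double MathSineSkewed(double x, double skew) =>
        Math.Sin(x) / Math.Sqrt(Math.Pow(skew + Math.Cos(x), 2) + Math.Pow(Math.Sin(x), 2));
}

Setting skew to 0.0 reproduces the classic symmetric composite forecast, so existing models can be migrated without changing any other parameter.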
Further Reading

Salgado et al. (2020): Skewed Business Cycles
Morley, Piger (2012): The Asymmetric Business Cycle
Jensen et al. (2020): Leverage and Deepening Business Cycle Skewness
Fatas, Mihov (2013): Recoveries

As published in "Cycles Magazine": This article was published in CYCLES MAGAZINE, Jan. 2021, The Official Journal of the Foundation for the Study of Cycles, Vol. 48, No. 2, 2021, page 80ff. (Source link: https://journal.cycles.org/Issues/Vol48-No2-2021/index.html?page=80)

Hurst Nominal Cycle Model

Hurst's cycle theory states that the movement of financial market prices is the result of the combination of harmonically related cycles. Hurst recommended a collection of 11 cycles for daily analysis. These cycles are referred to as the Nominal Model. He did extensive research and discovered 11 cycles, ranging from 5 days to 18 years, in a large number of stocks. He published the average wavelength of each of these cycles in his Cycles Course. "Nominal cyclical model" is actually the full name, but it is shortened to "Nominal Model" for simplicity. Thus, for example, when we refer to the nominal 20-week cycle, we are discussing the cycle to which the name "20 weeks" has been given. The cycle is not really 20 weeks long, due to the dynamic nature of cycles. Cycles are not perfectly even; they vary around their length, phase and amplitude over time. Therefore, the use of additional techniques to deal with the principle of cycle variation, like digital signal processing, is required.

The Hurst Nominal Cycle Model

Name (Nominal Cycle) | Average length | Average length (days)
5 day    | 4.3 days     | 4.3 days
10 day   | 8.5 days     | 8.5 days
20 day   | 17 days      | 17 days
40 day   | 34.1 days    | 34.1 days
80 day   | 68.2 days    | 68.2 days
20 week  | 19.48 weeks  | 136.4 days
40 week  | 38.97 weeks  | 272.8 days
18 month | 17.93 months | 545.6 days
54 month | 53.77 months | 1636.8 days
9 year   | 8.96 years   | 3272.6 days
18 year  | 17.93 years  | 6547.2 days

Source: Hurst Cycles Course

Cycle Scanner Framework

Framework Overview

The following figure outlines how the Cycle Scanner algorithm works. The following pages describe each step of this algorithm.

Detrending

The algorithm has a dynamic filter for detrending that is required for data preprocessing. Detrending ensures that the data under consideration is not affected by trends or one-time events. The extraction of linear trends from time series data is a required precondition for successful cycle research. In the business cycle literature, the Hodrick and Prescott (1980) filter (HP filter) has become the standard method for removing long-run movements, like trends, from the data. Hodrick and Prescott proposed the HP filter to decompose macroeconomic time series data into cycle and trend components. The HP filter assumes that movements in time series include a smooth and slowly changing trend component. By removing this trend component from the data series, the filter delivers the pure underlying cyclic behavior. Visually, this detrending technique is like drawing a smooth freehand trend line through the plotted chart data and subtracting this "freehand" trend line from the full data set. The resulting component is only based on the cyclic behavior without the underlying trend. Now we can proceed and apply additional cycle analysis in the next step to detect the cycles that are dominant and genuine within this filtered data set.
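For readers who want to experiment, here is a compact, unoptimized sketch of HP-filter detrending. The class and method names are assumptions, the dense solver is chosen for readability rather than speed, and the production Cycle Scanner is not published, so this only illustrates the trend/cycle decomposition idea. The smoothing parameter lambda must be scaled to the bar frequency (the classic 1600 applies to quarterly data; daily data needs a much larger value):

using System;

public static class Detrending
{
    // HP filter sketch: returns the cyclical component y - trend, where the
    // trend solves (I + lambda * D'D) * trend = y and D is the (n-2) x n
    // second-difference operator.
    public static double[] HpCycle(double[] y, double lambda)
    {
        int n = y.Length;
        var a = new double[n, n];

        // Build the matrix I + lambda * D'D.
        for (int i = 0; i < n; i++) a[i, i] = 1.0;
        for (int r = 0; r < n - 2; r++)
        {
            // Row r of D has coefficients (1, -2, 1) at columns r, r+1, r+2.
            int[] cols = { r, r + 1, r + 2 };
            double[] d = { 1.0, -2.0, 1.0 };
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                    a[cols[i], cols[j]] += lambda * d[i] * d[j];
        }

        // Solve a * trend = y by Gaussian elimination with partial pivoting.
        var b = (double[])y.Clone();
        for (int k = 0; k < n; k++)
        {
            int p = k;
            for (int i = k + 1; i < n; i++)
                if (Math.Abs(a[i, k]) > Math.Abs(a[p, k])) p = i;
            if (p != k)
            {
                for (int j = 0; j < n; j++) (a[k, j], a[p, j]) = (a[p, j], a[k, j]);
                (b[k], b[p]) = (b[p], b[k]);
            }
            for (int i = k + 1; i < n; i++)
            {
                double f = a[i, k] / a[k, k];
                for (int j = k; j < n; j++) a[i, j] -= f * a[k, j];
                b[i] -= f * b[k];
            }
        }
        var trend = new double[n];
        for (int i = n - 1; i >= 0; i--)
        {
            double s = b[i];
            for (int j = i + 1; j < n; j++) s -= a[i, j] * trend[j];
            trend[i] = s / a[i, i];
        }

        // The cyclical component is what remains after removing the trend.
        var cycle = new double[n];
        for (int i = 0; i < n; i++) cycle[i] = y[i] - trend[i];
        return cycle;
    }
}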
However, we must carefully treat the output obtained from this purely mechanical detrending algorithm, because it is well known that this technique may generate spurious cycle variants; that is, the HP filter can generate cycle dynamics even if none are present in the original data. Hence, the presence of cycles in HP-filtered data does not imply that real cycles exist in the original data. Therefore, we need to apply additional mechanisms afterward to validate identified cycles as genuine and to remove possible "invalid" cycles. Later, at step 3 of our Cycle Scanner framework, we will show how to circumvent this problem by including goodness-of-fit statistics for our genuine dominant cycle filtering. [1], [2]

To optimize the HP filter and to keep these shortcomings of spurious cycles as small as possible, first, the proper adjustment of the parameter "λ" in the decomposition of the HP filter is important. [3] Second, additional testing of how the estimated cyclical components behave, based on cross-correlation evaluations, is needed to differentiate "genuine" cycles from "spurious" ones. Both adjustments have been incorporated into our Cycle Scanner framework to compensate for the drawbacks of the HP filter.

A review of the critical discussions on the HP filter method, however, indicates that the HP filter is likely to remain the standard method for detrending for a long time to come. Ravn and Uhlig concluded in 1997 as follows: "None of the shortcomings and undesirable properties are particularly compelling: the HP filter has withstood the test of the time and the fire of discussion remarkably well."

To further optimize the detrending preprocessing, additional recent findings based on the work of Jim Hamilton (2016) might be considered. [4] However, the HP filter has broad support in the scientific community and is widely used, and we have been able to use the approach successfully for years in cycle forecasting: never change a running system too fast. Therefore, we strongly recommend that anyone who wants to rebuild a similar or more optimized detrending framework should conduct further research in this area.

References

[1] Cogley, T., Nason, J. (1992): "Effects of the Hodrick-Prescott filter on trend and difference stationary time series: Implications for business cycle research," Journal of Economic Dynamics and Control.
[2] "Hodrick-Prescott Filter in Practice," Source: http://www.depeco.econo.unlp.edu.ar/jemi/1999/trabajo01.pdf
[3] Ravn, M., Uhlig, H. (1997): "On Adjusting the HP-Filter for the Frequency of Observations."
[4] James D. Hamilton (2016): "Why You Should Never Use the Hodrick-Prescott Filter," Department of Economics, UC San Diego. Source: http://econweb.ucsd.edu/~jhamilto/hp.pdf

Cycle Detection

As we now have the cyclical data set prepared, the next step is to discover the individual cycles that are active. The engine needs to perform a spectral analysis and then isolate those cycles that are repetitive and have the largest amplitudes. For that, we need to decide on a cycle detection algorithm that suits our goal. Most cycle researchers are familiar with the fast Fourier transform (FFT), and many "FFT-based engines" are available to detect one or more cycles in data sets. What many do not know, however, is that there is a special variant: the Goertzel algorithm [1] [2]. The algorithm, originally developed in 1958 - long before the era of smartphones - was used to detect the "dominant" tone frequencies of DTMF signaling in landline phones.
Have you ever thought about how the telephone exchange knows which button has been pressed? The answer is the Goertzel algorithm. Today, the Goertzel algorithm is used extensively in communications for tone detection and is built into hardware as integrated circuits to detect the tone of a pushed button in near real time. Additionally, and even more importantly, the Goertzel algorithm was originally designed to detect cycles in data sets that have similar characteristics to contemporary financial series data. The problem back then was that a special tone needed to be detected in a very short amount of available data and with considerable noise. This is similar to the problem of determining dominant cycles in today's financial data sets. Therefore, instead of using standard Fourier or wavelet transforms, why not use a well-established variant of the discrete Fourier transform (DFT) - the Goertzel algorithm - given that our requirements for cycle detection in financial markets are similar to the ones Goertzel was addressing in the case of old phone lines?

Our research shows that the Goertzel algorithm delivers reliable results in decoding dominant cycles out of detrended financial data sets, outperforming other methods such as wavelets or MESA. Admittedly, you need to apply the Goertzel DFT (GDFT) in a special way, as you need to run a GDFT test on all possible wavelengths and use different methods to obtain the current phase and amplitude. That is, for covering a full cycle length spectrum, the Goertzel algorithm has a higher complexity than FFT algorithms. Nevertheless, using the Goertzel algorithm to obtain the dominant cycle length out of short and noisy data, along with standard methods to obtain the related current phase and amplitude for the detected cycle length, helps generate all dynamic cycle data for the active cycle at the last point of the data set under consideration. As we are not interested in the "averaged" cycle length over longer data sets, we want the cycle length and phase that are active on the last bar of the chart. Therefore, this combination of the Goertzel algorithm as the core, with additional analysis to obtain the current phase of the cycle at the end of the data set, is used.

Finally, this approach is supported by a study conducted by Dennis Meyers (2003) on the Goertzel method: [3] "With very noisy data where the noise strength is greater than the signal strength, [...], only the Goertzel Algorithm can successfully identify the frequencies present."

In addition, we can see an increasing amount of noise coming into play in financial markets. Some examples of this are high-frequency trading, pure algo-based trading engines, and alternative news. So, in our real-life environments, we will not see a "clean" financial data set, as it is diluted by noise that hides the real underlying cycles. The Meyers study shows that the GDFT even outperforms the "MESA" method used by John F. Ehlers in most of his cycle research. Therefore, our cycle scanner framework applies the GDFT to the detrended data set to cover all possible cycle lengths. Once the most active cycle is detected based on the full-spectrum GDFT analysis, we use an additional run on a shorter subset of the original full data set to check the current phase status on the last bar, as we are interested in the status of the detected cycle length at the point of the analysis - the last bar available.
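As a minimal sketch of the core idea - the classic Goertzel recurrence, evaluated wavelength by wavelength - the following C# shows how a length spectrum can be scanned. This is a simplification of the full GDFT scan described above (phase extraction and the shorter re-run on recent bars are omitted), and the names are illustrative:

using System;

public static class GoertzelScan
{
    // Goertzel amplitude estimate at one cycle length (in bars) for a
    // detrended series. Unlike a radix-2 FFT, the probed wavelength does
    // not have to divide the sample count, so arbitrary lengths
    // (e.g. 5-400 bars) can be scanned directly.
    public static double AmplitudeAt(double[] data, double cycleLengthBars)
    {
        int n = data.Length;
        double w = 2.0 * Math.PI / cycleLengthBars;    // angular frequency per bar
        double coeff = 2.0 * Math.Cos(w);
        double s1 = 0.0, s2 = 0.0;
        for (int i = 0; i < n; i++)
        {
            double s0 = data[i] + coeff * s1 - s2;     // Goertzel recurrence
            s2 = s1;
            s1 = s0;
        }
        // Squared magnitude of the corresponding DFT bin, scaled to an amplitude.
        double power = s1 * s1 + s2 * s2 - coeff * s1 * s2;
        return Math.Sqrt(Math.Max(power, 0.0)) * 2.0 / n;
    }

    // Scan a whole length spectrum and return the length with the largest
    // amplitude - a candidate dominant cycle before validation and ranking.
    public static double StrongestLength(double[] data, int minLen = 5, int maxLen = 400)
    {
        double bestLen = minLen, bestAmp = double.MinValue;
        for (int len = minLen; len <= Math.Min(maxLen, data.Length); len++)
        {
            double amp = AmplitudeAt(data, len);
            if (amp > bestAmp) { bestAmp = amp; bestLen = len; }
        }
        return bestLen;
    }
}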
References:
[1] Goertzel: https://www.mstarlabs.com/dsp/goertzel/goertzel.html
[2] Goertzel: https://courses.cs.washington.edu/courses/cse466/12au/calendar/Goertzel-EETimes.pdf
[3] Source: http://meyersanalytics.com/publications2/MesaVsGDFT.pdf

Cycle Validation

After finishing the cycle analysis, we get a list of detected active cycles. The cycle detection algorithm gave us a list of cycles with length, amplitude, and current phase status at the end of the data set. Now we need to validate and rank these cycles, as our approach is looking for the most active cycles in this list. Before we start ranking, let us get back to what we introduced in the first detrending step: because of the pitfalls of the HP filter, we need to cross-check the detected cycles with a second algorithm to validate whether a cycle is genuine or perhaps spurious. This step is important to avoid getting "virtual" cycles that are not in the original data set and have just been introduced by the detrending algorithm itself. Therefore, we apply a special form of statistical correlation analysis to each detected cycle length.

During this step of cycle validation, the statistical reliability of each cycle is evaluated. The goal of the algorithm is to exclude cycles that have been influenced by one-time random events (news, for example) and cycles that are not genuine. One of the algorithms used for this purpose is a more sophisticated Bartels test. The test builds on detailed mathematics (statistics) and measures the stability of the amplitude and phase of each cycle. Bartels' statistical test for periodicity, developed during his work for the Carnegie Institution of Washington in the early 1930s, was embraced by the Foundation for the Study of Cycles decades ago as the single best test for a given cycle's projected reliability, robustness and, consequently, usefulness. It was published in 1935 by Julius Bartels in Volume 40, No. 1 of the scientific journal "Terrestrial Magnetism and Atmospheric Electricity" under the title "Random fluctuations, persistence, and quasi-persistence in geophysical and cosmical periodicities." Later, in 1973, Charles E. Armstrong gave a brief example and case study on how to apply the Bartels test to financial time series data, titled "Applying the Bartels Test of Significance to a Time Series Cycle Analysis." [1]

The Bartels test returns a value that measures the likelihood that a given cycle is genuine: values range from 0 to 1, and the lower the value, the less likely it is that this cycle is due to chance. The test considers both the consistency and the persistence of a given cycle within the data set it is applied to. To make it more human-readable - as we are looking for an easily readable indication of whether the cycle is genuine - we convert the raw Bartels value into a percentage that indicates how likely it is that the cycle is genuine, using the conversion formula:

Cycle Score Genuine % = (1 - Bartels value) * 100

This gives us a value between 0% (random) and 100% (genuine). This test helps us filter out possible cycles that might have been detected in the cycle detection step (Step 2), but had only been in the data series for a short or random period and should therefore not be considered dominant cycles in the underlying original data series. As we have a final percentage score, we just need to define an individual threshold below which cycles should be skipped.
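In code, this validation step boils down to the conversion formula above plus a threshold filter. A minimal sketch (illustrative type and method names, not a published API) with a configurable threshold:

using System.Collections.Generic;
using System.Linq;

// Illustrative record: a detected cycle together with its raw Bartels
// value (0 = genuine ... 1 = random).
public record ValidatedCycle(double LengthBars, double Amplitude, double BartelsValue)
{
    // Conversion used above: Genuine % = (1 - Bartels value) * 100.
    public double GenuinePercent => (1.0 - BartelsValue) * 100.0;
}

public static class CycleValidation
{
    // Keep only the cycles whose genuineness score clears the threshold.
    public static List<ValidatedCycle> FilterGenuine(
        IEnumerable<ValidatedCycle> detected, double thresholdPercent)
        => detected.Where(c => c.GenuinePercent > thresholdPercent).ToList();
}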
We recommend using a threshold of >49%; hence, cycles with a Bartels genuine percentage value below 49% should be skipped by any cycle forecasting or analysis techniques that follow.

Further Reference:
[1] C. E. Armstrong, Cycles Magazine, October 1973, p. 231ff, "Part 25: Testing Cycles for Statistical Significance" (see pdf attachment)

Ranking

An important final step in making sense of the cyclic information is to establish a measurement for the strength of a cycle. After detection and validation, we have cycles that are dominant (based on their amplitude) and genuine (validated as a real driving force in the financial market). For trading purposes, this does not suffice. The price influence of a cycle per bar on the trading chart is the most crucial information. Let me give you an example by comparing two cycles. One cycle has a wavelength of 110 bars and an amplitude of 300. The other cycle has a wavelength of 60 bars and an amplitude of only 200. If we apply the "standard" method for determining the dominant cycle, namely selecting the cycle with the highest amplitude, we would select the cycle with the wavelength of 110 and the amplitude of 300. But let us look at the following information - the force of the cycle per bar:

Length 110 / Amplitude 300 = Strength per bar: 300 / 110 = 2.7

Length 60 / Amplitude 200 = Strength per bar: 200 / 60 = 3.3

For trading, it is more important to know which cycle has the biggest influence on the price per bar, and not only which cycle has the highest amplitude! That is the reason I am introducing the measurement value "Cycle Strength." The Cycle Scanner automatically calculates this value. To build a ranking from the remaining cycles, we recommend sorting them by their "influence" per bar. As we are looking for the most dominant cycles, these are the cycles that influence the movement of the data series the most per single bar. Sort the outcome according to the calculated cycle strength score. Now we have a top-to-bottom list of the cycles having the highest influence on price movements per bar.

What is the dominant cycle?

After the cycle scanner engine has completed all steps (detrend, detect, validate, rank), the cycle at the top of the list (with the highest cycle strength score) provides us with the dominant cycle. The wavelength of this cycle is the dominant market vibration, which is very useful for cycle prediction and forecasting. However, the result is not limited to the cycle length: we also know - and this is very important - the current phase status of this cycle (not the phase averaged over the full data set). This allows us to provide more valid cycle projections on the "right" side of the chart for trading, instead of using the commonly used "averaged" phase status over the full data set for this cycle.

Spectral Averaging

Cycles are not static - so what? Most cycle analysts have seen the moment-to-moment fluctuations, often referred to as shifts in the length and phase of a cycle, which are common in continuous measurements of a supposedly steady spectrum of financial records. But we should know that cycles are not static in real life. We can see these variations in size and phase for each cycle length in the updated spectrogram after a new data point/bar is added to the data set.
Spectral Averaging

Cycles are not static, so what?

Most cycle analysts have seen the moment-to-moment fluctuations, often referred to as shifts in the length and phase of a cycle, which are common in continuous measurements of a supposedly steady spectrum of financial records. We should remember that cycles are not static in real life. These variations in size and phase for each cycle length become visible in the updated spectrogram whenever a new data point/bar is added to the data set.

Although small variations are no cause for concern, we do not live in an ideal, noise-free world, and we are always looking for ways to reduce the phase shifts and variations in dominant cycles. Therefore, instead of additionally smoothing the input data, we will apply averaging to the resulting cycle spectrum.

The basic idea of averaging to reduce spectral noise is the same as averaging - or smoothing - the input signal. However, smoothing the raw input signal reduces data accuracy and delays the signal. This is not what we need, especially because the end of the data set - the current time - is of the utmost importance when working with cycles in financial time series. Averaging the input, and thereby adding a delay to our series of interest, is a bad idea for cycle analysis of data sets in which the most recent data points matter more than those from long ago.

Don't smooth the input - average the spectrum!

Averaging a spectrum can reduce fluctuations in cycle measurements without changing the input signal or adding delay, which makes it an important part of spectrum measurement. Spectral averaging is a kind of ensemble averaging: both the "samples" and the "average" are cycle spectra, and the average spectrum is obtained by averaging the sample spectra. Due to the nature of spectra, however, this is not quite as simple as it sounds. If you apply a discrete Fourier transform routine to a set of real-world samples, the output is a set of complex numbers representing the magnitude and phase of the spectrum. Calculating an average spectrum involves averaging over the common frequencies of several spectra.

Spectral averaging eliminates the effect of phase noise. The magnitude of the spectrum is independent of time shifts in the input signal, but the phase can change with each data set. By averaging the power spectra and taking the square root of the result, we eliminate the effect of phase variation. Reducing the noise variance helps us distinguish small real cycles from noise peaks. The result of spectral averaging is an estimate of the spectrum that contains the same amount of energy as the source. Although the noise energy is preserved, the variance (noise fluctuation) in the spectrum is reduced. This reduction helps us avoid detecting "false cycles" that are the result of single, one-time noise peaks. If no window other than a rectangular window is applied to the data sections, this method is a derivative of the so-called Bartlett method.
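Before turning to a concrete example, here is a minimal C# sketch of the averaging procedure described above. It is an illustration only, not the Cycle Scanner's implementation: the naive per-period DFT (PowerAtPeriod) and the windowCount/minWindow parameters are assumptions, while the window handling follows the text, with every window ending at the last available bar and only the start point changing.

using System;

public static class SpectralAveraging
{
    // Power (squared magnitude) of the DFT component with the given period,
    // computed over data[start .. start + length - 1] by a naive direct sum.
    static double PowerAtPeriod(double[] data, int start, int length, double period)
    {
        double re = 0.0, im = 0.0;
        for (int i = 0; i < length; i++)
        {
            double angle = 2.0 * Math.PI * i / period;
            re += data[start + i] * Math.Cos(angle);
            im -= data[start + i] * Math.Sin(angle);
        }
        // Normalize by the window length so windows of different sizes are comparable.
        return (re * re + im * im) / ((double)length * length);
    }

    // Average the power spectra of windowCount windows that all end at the last
    // bar and differ only in their start point, then take the square root to
    // return to an amplitude-like scale. Requires minWindow <= data.Length.
    public static double[] AveragedSpectrum(double[] data, double[] periods,
        int windowCount, int minWindow)
    {
        int n = data.Length;
        int step = windowCount > 1 ? (n - minWindow) / (windowCount - 1) : 0;
        var avg = new double[periods.Length];

        for (int w = 0; w < windowCount; w++)
        {
            int length = n - w * step; // the window shrinks from the front only,
            int start = n - length;    // so every window keeps the same end point
            for (int p = 0; p < periods.Length; p++)
                avg[p] += PowerAtPeriod(data, start, length, periods[p]);
        }

        for (int p = 0; p < periods.Length; p++)
            avg[p] = Math.Sqrt(avg[p] / windowCount); // average power, then sqrt
        return avg;
    }
}

Averaging the power (squared magnitude) before taking the square root is what removes the phase of the individual windows from the result, as described above.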
Cycle detection example

Let's illustrate the concept of spectral averaging with a simple example. We use a constructed, simplified data set consisting of two cycles, noise, and different trends. The two cycles have lengths of 45 and 80 days.

Raw input signal consisting of cycles, noise, and trends

Let us look at the spectrogram results without (1) and with (2) spectral averaging. The first diagram shows the basic amplitude spectrum of the signal without spectral averaging.

Chart 1: Cycle spectrum without spectral averaging

It clearly shows the cycles, with amplitude peaks at 45 and 80 days, but you can also identify lower peaks in the spectrum that have nothing in common with the real cycles of the original data set. These smaller amplitude peaks are "false cycles" - noise only. We want to avoid using these cycles in cycle forecasting models.

Since it is important to identify and separate peaks caused by noise, let us see whether averaging the spectrum adds value. The next diagram uses the same input signal, but instead of generating the spectrum in a single run, several spectra are computed and combined into a spectral average. In our case, each window changes only the beginning of the series and always uses the same end point, giving us overlapping windows that always include the last available close/bar.

Chart 2: Cycle spectrum with spectral averaging

The result illustrates that the peaks at 45 and 80 days are still present, but the noise floor has become much lower, and thanks to the averaging of the windowed spectra we get fewer "false cycle" peaks. This helps us distinguish between important and unimportant cycles in subsequent cycle prediction modeling.

References:
Shlomo Engelberg (2008): "The Spectral Analysis of Random Signals", in: Digital Signal Processing, Signals and Communication Technology, Springer, London. https://doi.org/10.1007/978-1-84800-119-0_7
Oppenheim and Schafer (2009): "Discrete-Time Signal Processing", Chapter 10, Prentice Hall, 3rd edition.
Bartlett (1950): "Periodogram Analysis and Continuous Spectra", Biometrika, Volume 37, Issue 1-2, June 1950, pages 1-16. https://doi.org/10.1093/biomet/37.1-2.1
Wikipedia: "Bartlett's Method", https://en.wikipedia.org/wiki/Bartlett%27s_method
Shearman (2018): "Take Control of Noise with Spectral Averaging", https://www.dsprelated.com/showarticle/1159.php

Endpoint flattening

Digital signal processing (e.g., the discrete Fourier transform) assumes that the time-domain data set is periodic and repeats. Suppose a price series starts at 3200, toggles and wobbles for 800 data samples, and ends at the value 2400. The DFT treats the series as if it starts at zero, suddenly jumps to 3200, moves to 2400, suddenly jumps back to zero, and then repeats. The DFT has to create all sorts of different frequencies in the frequency domain to reproduce this behavior. These false frequencies, generated to match the jumps and the high average price, mask the amplitudes of the true frequencies and make them look like noise. Fortunately, this effect can be nearly eliminated by a simple technique called endpoint flattening.

Example

The following chart shows an example data series (green) and the detrended data in the bottom panel (gold) without endpoint flattening. The next example shows the same data series with endpoint flattening applied to the detrended series. The difference is visible only at the beginning and the end of the two detrended series: the first starts below 0 and ends well above 0, while the second starts and ends exactly at zero.

Math formula

Calculating the coefficients for endpoint flattening is simple. Take n closing prices. If x(1) represents the first price in the sampled data series, x(n) the last point in the data series, and xf(i) the new endpoint-flattened series, then:

a = x(1)
b = (x(n) - x(1)) / (n - 1)
xf(i) = x(i) - (a + b * (i - 1)),  for i = 1 ... n

We can see that when i = 1, xf(1) = 0, and when i = n, xf(n) = 0. What we have done is subtract the beginning value of the time series to make the first value equal to zero, and then rotate the rest of the time series so that the end point is also zero.
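As a quick numeric check of the formula (the five prices are chosen purely for illustration): take n = 5 prices x = {3200, 3100, 2900, 2700, 2400}. Then a = 3200 and b = (2400 - 3200) / 4 = -200, so the flattened series is xf = {0, 100, 100, 100, 0} - the first and last values are zero, as required.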
This technique reduces the endpoint distortion but introduces a low-frequency artifact into the Fourier frequency spectrum. Fortunately, we will not be looking for frequencies in that range, so this distortion will have minimal impact.

C# .NET Function example

Apply endpoint flattening in place to an array of double values:

public void endpointflattening(double[] values)
{
    int datapoints = values.Length;          // number of samples n
    double a = values[0];                    // first value: a = x(1)
    double xn = values[datapoints - 1];      // last value: x(n)
    double b = (xn - a) / (datapoints - 1);  // slope b = (x(n) - x(1)) / (n - 1)

    // Subtract the straight line a + b*i (i is zero-based here, matching
    // b * (i - 1) in the one-based formula above) so that the series
    // starts and ends at zero.
    for (int i = 0; i < datapoints; i++)
    {
        values[i] = values[i] - (a + (b * i));
    }
}

Further Reading & References:
Dennis Meyers: Working Papers on Walk-Forward Optimization with Algorithmic Trading Strategies, meyersanalytics.com