Developing a Robust Trading Strategy from the Lorentzian
Classification Algorithm on TradingView
I. Executive Summary
The Lorentzian Classification (LC) algorithm, a sophisticated machine learning
classification model, offers a novel approach to analyzing historical financial data. It
distinguishes itself by employing a Lorentzian distance metric instead of the more
common Euclidean distance. This choice renders the algorithm more robust to outliers
and noise, better suited for time-series data, and capable of capturing non-linear
market relationships, thereby providing a valuable tool for predicting future price
movements.1
Transforming this powerful indicator into a comprehensive trading strategy requires a
structured approach. Key components include defining precise entry and exit rules,
integrating robust trend and volatility filters (such as Exponential Moving Averages
(EMA), Supertrend, and Average Directional Index (ADX)), and implementing a
disciplined risk management framework that incorporates Average True Range
(ATR)-based stop-losses and dynamic position sizing.5
Successful implementation of a Lorentzian Classification-based strategy necessitates
careful parameter optimization, continuous signal confirmation with traditional price
action and volume analysis, and an adaptive approach to different market regimes.
While the LC algorithm provides highly advanced signals, it should be viewed as a
potent tool within a broader, disciplined trading system, rather than a standalone
predictor. Adherence to sound risk management principles is paramount for long-term
viability.3
II. The Lorentzian Classification Algorithm: A Deep Dive
A. Theoretical Foundation: Lorentzian Distance vs. Euclidean Distance in Market
Analysis
The Lorentzian Classification (LC) is a specialized TradingView indicator developed by
jdehorty and their team.1 It functions as a Machine Learning classification algorithm,
adept at categorizing historical data from a multi-dimensional feature space. Beyond
mere categorization, the LC algorithm demonstrates utility in predicting the direction
of future price movements.2
Traditional Nearest Neighbor (NN)-based search algorithms commonly default to
Euclidean distance as their metric for measuring similarity. However, this approach
often proves suboptimal for financial market data. Financial markets are frequently
impacted by significant, non-linear events, such as Federal Open Market Committee
(FOMC) meetings or "Black Swan" occurrences, which can introduce considerable
distortions and non-uniform behavior into price movements.2 Euclidean distance
treats all differences between data points equally, failing to account for the unique
characteristics of market data.3
In contrast, Lorentzian distance, a concept borrowed from spacetime geometry, offers
a more nuanced measure of similarity.3 Its logarithmic formulation, expressed as
$d(x, y) = \sum_i \ln(1 + |x_i - y_i|)$, inherently addresses several critical challenges in financial
time-series analysis. This formulation naturally handles scale invariance, ensuring that
large price movements do not disproportionately overwhelm smaller, yet significant,
patterns. It also provides outlier robustness, effectively dampening the influence of
extreme values rather than allowing them to dominate the analysis. Furthermore,
Lorentzian distance is better equipped to capture the non-linear relationships
inherent in market behavior than purely linear metrics.3 Empirical studies lend support
to the notion that Lorentzian distance exhibits greater resilience to outliers and noise
compared to Euclidean distance.2
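The contrast is straightforward to demonstrate in code. The Pine Script sketch below compares the two metrics over a toy two-feature market state (normalized RSI and ATR); the feature choices and the 10-bar comparison offset are illustrative assumptions, not the indicator's actual feature set.
```pine
//@version=5
indicator("Lorentzian vs. Euclidean Distance (sketch)", overlay=false)

// Illustrative two-feature market state; the real indicator engineers its own features.
f1 = ta.rsi(close, 14) / 100.0   // normalized momentum feature
f2 = ta.atr(14) / close          // normalized volatility feature

// Lorentzian distance: sum over features of ln(1 + |x_i - y_i|).
lorentzian(a1, a2, b1, b2) =>
    math.log(1 + math.abs(a1 - b1)) + math.log(1 + math.abs(a2 - b2))

// Euclidean distance for comparison.
euclidean(a1, a2, b1, b2) =>
    math.sqrt(math.pow(a1 - b1, 2) + math.pow(a2 - b2, 2))

// Distance between the current state and the state 10 bars ago. A single
// extreme feature difference inflates the Euclidean value far more than
// the logarithmically dampened Lorentzian one.
plot(lorentzian(f1, f2, f1[10], f2[10]), "Lorentzian", color.teal)
plot(euclidean(f1, f2, f1[10], f2[10]), "Euclidean", color.orange)
```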
This fundamental difference in how Lorentzian distance models market dynamics is
significant. The explicit mention of its origin in Einstein's theory of General Relativity 2
suggests a philosophical departure from traditional market analysis. While Euclidean
distance implicitly assumes a "flat" or uniform market space where all deviations carry
equal weight, Lorentzian distance implies a "curved" market space. In this curved
space, the significance of price movements is relative to their context and magnitude,
which effectively reduces the impact of extreme, outlier events. This characteristic
allows Lorentzian space to better accommodate the "warping of price-time" by
compressing the Euclidean neighborhood, causing new neighborhood distributions to
cluster around major feature axes. This approach aligns intuitively with market
behavior, as price action following a significant event might resemble previous similar
events.2 Consequently, the algorithm exhibits enhanced resilience to the "noise" and
"outliers" that frequently appear in financial data. By de-emphasizing extreme
deviations, it aims to identify more meaningful and robust patterns, thereby improving
the signal-to-noise ratio in its predictions. For practitioners, this implies that the LC
algorithm is designed to be less prone to false signals triggered by sudden,
anomalous price spikes or drops. This suggests a potentially more stable and reliable
foundation for a trading strategy in volatile markets, though it does not negate the
necessity for further filtering or comprehensive risk management.
B. Core Mechanics: K-Nearest Neighbors (KNN) with Relativistic Weighting
At its core, the Lorentzian Classification algorithm operates by searching historical
market states for patterns that are similar to current conditions. It accomplishes this
through a K-Nearest Neighbors (KNN) approach.3 Once similar historical patterns are
identified, each "neighbor" is assigned a weight that is inversely proportional to its
Lorentzian distance from the current market state. This is formalized by the weighting
function: w = 1 / (1 + distance).3 This weighting creates a "gravitational" effect, where
historical patterns that are closer in Lorentzian space exert a stronger influence on
the current prediction.3
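A minimal sketch of this weighting scheme, assuming a single illustrative feature and a naive full scan of the lookback window (the published script instead keeps only the K nearest neighbors, sampled chronologically), is shown below. It demonstrates how w = 1 / (1 + distance) lets closer historical states dominate the prediction.
```pine
//@version=5
indicator("Lorentzian Neighbor Weighting (sketch)", overlay=false, max_bars_back=500)

lookback = input.int(100, "Historical Lookback", minval=50, maxval=500)

// One illustrative feature; the real model searches a multi-dimensional space.
feature = ta.rsi(close, 14) / 100.0

float prediction = na
if bar_index > lookback + 4
    weightedSum = 0.0
    weightSum   = 0.0
    for i = 5 to lookback
        d = math.log(1 + math.abs(feature - feature[i]))   // Lorentzian distance
        w = 1.0 / (1.0 + d)                                // "gravitational" weight
        outcome = close[i - 4] > close[i] ? 1.0 : -1.0     // direction 4 bars after the match
        weightedSum += w * outcome
        weightSum   += w
    prediction := weightedSum / weightSum

plot(prediction, "Weighted directional prediction", color.blue)
```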
Two crucial parameters govern this core mechanism:
● K Nearest Neighbors: This parameter directly controls the number of historical
patterns considered similar to the current conditions.3 The default value for this
setting is 8, with an optimal range typically falling between 5 and 8 for most
markets.2 A lower 'K' value means the prediction relies on fewer, very close
historical matches. This makes the strategy highly responsive to immediate
market changes but potentially more susceptible to noise or fleeting patterns.
Conversely, a higher 'K' value smooths out these fluctuations by averaging more
neighbors, which can lead to more robust signals but introduces a degree of lag.3
● Historical Lookback: This setting determines the depth of historical data the
algorithm scans to find similar patterns.3 An optimal range for most timeframes is
typically between 100 and 200 bars, although the configurable range extends
from 50 to 500 bars.3 A deeper historical lookback can enhance pattern
recognition by providing more context, but it may also reduce the indicator's
adaptability to very recent market changes. Conversely, a shorter lookback makes
the model more adaptive to current market shifts but might miss broader,
longer-term cycles. The broader "Max Bars Back" setting, with a default of 2000,
controls the overall data range available for backtesting and can be reduced if the
script experiences slow loading times.2
The specific values chosen for 'K' and 'Historical Lookback' directly determine the
trade-off between the strategy's responsiveness to immediate market conditions and
its robustness derived from a broader historical context. Inappropriate settings can
lead to either over-sensitivity, resulting in frequent whipsaws, or excessive lag,
causing missed trading opportunities. This highlights that "optimal" settings are not
universal but are highly dependent on the specific asset, trading timeframe, and
prevailing market characteristics. Traders must engage in thorough backtesting and
iterative optimization to identify the parameter combination that best suits their
trading style and the instrument being traded. The recommended ranges, such as 5-8
for K, serve as a valuable starting point for this crucial tuning process.
C. Feature Engineering: Constructing the Multi-Dimensional Market State Vector
The effectiveness of the Lorentzian Classification algorithm heavily relies on its ability
to accurately represent current market conditions through a multi-dimensional feature
vector.3 One variant of the algorithm utilizes a 12-dimensional representation 3, while
another allows users to specify between 2 and 5 features, with a default of 5.7
These features are carefully engineered to combine various aspects of price, volume,
volatility, and momentum 3:
● Price Features: These are crucial for detecting momentum and reversals. They
include multi-timeframe momentum analysis with lookbacks of 1, 2, 3, 5, and 8
bars.3
● Volume Features: Essential for identifying institutional activity, these involve
relative volume analysis against a 20-period average.3
● Volatility Features: Key for recognizing shifts in market regimes, these
incorporate Average True Range (ATR) and Bollinger Band width normalization.3
● Momentum Indicators: Vital for confirming trends, these include Relative
Strength Index (RSI) deviation from neutral and the Moving Average Convergence
Divergence (MACD)/price ratio.3
Another variant of the model supports features such as "RSI," "WT" (WaveTrend),
"CCI" (Commodity Channel Index), and "ADX" (Average Directional Index).2 These
features can accept one or two parameters (Parameter A, Parameter B), and a notable
aspect is that the same feature configured with different settings is treated as two
separate features, increasing the dimensionality and nuance of the input space.2 The
default values for these features are optimized for 4-hour to 12-hour timeframes.2 To
ensure equal weighting during distance calculations, each feature undergoes
min-max normalization.3 Additionally, a "Feature Window" parameter (ranging from 5
to 30) controls the length of the window used to capture context for these market
features. Longer windows incorporate more historical context but can reduce the
indicator's sensitivity.3
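A minimal sketch of this normalization step, assuming a few of the named features (RSI, CCI, ADX, ATR) with illustrative parameter choices:
```pine
//@version=5
indicator("LC Feature Normalization (sketch)")

featureWindow = input.int(14, "Feature Window", minval=5, maxval=30)

// Min-max normalization over the feature window so every feature spans [0, 1]
// and contributes equally to the distance calculation.
normalize(src, len) =>
    lo = ta.lowest(src, len)
    hi = ta.highest(src, len)
    hi == lo ? 0.5 : (src - lo) / (hi - lo)

[diPlus, diMinus, adxVal] = ta.dmi(14, 14)

f1 = normalize(ta.rsi(close, 14), featureWindow)   // RSI feature
f2 = normalize(ta.cci(close, 20), featureWindow)   // CCI feature
f3 = normalize(adxVal, featureWindow)              // ADX feature
f4 = normalize(ta.atr(14), featureWindow)          // volatility feature

plot(f1, "RSI feature", color.blue)
plot(f2, "CCI feature", color.orange)
plot(f3, "ADX feature", color.purple)
plot(f4, "Volatility feature", color.gray)
```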
The description of feature engineering reveals a level of sophistication that extends
beyond merely incorporating standard indicators. The use of a "12-dimensional
representation" 3 and the ability to treat "the same feature with different settings" as
distinct inputs 2 point to a potentially complex, non-linear transformation of raw
market data into the inputs for the Lorentzian algorithm. The quality and relevance of
these engineered features are paramount for any machine learning model's
performance. The specific design and normalization of these features directly dictate
what patterns the Lorentzian algorithm "perceives" as similar in the market. If the
features are not robust, representative, or well-tuned to the market's underlying
dynamics, even the advanced Lorentzian distance metric will be operating on flawed
input, leading to suboptimal or unreliable signals. This is precisely why the default
values are "optimized for 4H to 12H timeframes".2 For traders seeking to truly optimize
or adapt this strategy beyond its default settings, a deeper understanding of this
feature engineering process is crucial. Without it, attempts at optimization might be
akin to tuning a black box. This also implies that if a trader significantly deviates from
the recommended timeframes (e.g., scalping on 1-minute charts), they might need to
re-evaluate and potentially re-engineer the features themselves to maintain predictive
power.
D. Signal Generation: From Prediction Values to Confidence Levels
The process of generating trading signals from the Lorentzian Classification algorithm
involves a systematic sequence of steps for each current market state:
1. Feature Vector Construction: A multi-dimensional representation of the current
market conditions is first created using the engineered features.3
2. Historical Search: The algorithm then scans the defined historical lookback
period for patterns that are similar to the current feature vector, employing the
Lorentzian distance metric for similarity measurement.3
3. Neighbor Selection: Based on their Lorentzian distance, the K nearest historical
matches are identified.3
4. Outcome Analysis: For each identified historical match, the algorithm examines
what occurred N bars (defined by the Prediction Horizon) after that match.3
5. Weighted Prediction: The observed outcomes from the selected neighbors are
then combined. This combination uses distance-based weights, ensuring that
closer historical patterns exert a stronger influence on the final prediction.3
6. Confidence Calculation: Finally, the level of agreement among the neighbors'
outcomes is measured to determine the confidence level associated with the
prediction.3
Several parameters directly influence this signal generation process:
● Prediction Horizon (1-20): This parameter defines how far ahead the indicator
attempts to predict market movement. Shorter horizons are typically suited for
scalping strategies, while longer horizons are more appropriate for swing trading
approaches.3
● Signal Threshold (0.5-0.9): This critical parameter sets the minimum confidence
level required for a signal to be generated.3 A higher threshold reduces the
number of false signals but may result in the indicator missing potential trading
opportunities.3
● Smoothing (1-10): An Exponential Moving Average (EMA) is applied to the raw
predictions. While more smoothing reduces noise in the signals, it inherently
introduces more lag.3
Specific conditions trigger long and short signals:
● Long Signal Conditions: A long signal is generated when the prediction value is
greater than a specified threshold AND it is accompanied by high
confidence.3
● Short Signal Conditions: A short signal occurs when the prediction value is less
than a specified negative threshold AND it is also accompanied by high
confidence.3
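These trigger conditions reduce to two boolean expressions. In the sketch below, `prediction` (in [-1, 1]) and its derived confidence are crude stand-ins for the model outputs, not the indicator's real calculation; only the gating logic is the point.
```pine
//@version=5
indicator("LC Signal Gating (sketch)", overlay=true)

signalThreshold = input.float(0.6, "Signal Threshold", minval=0.5, maxval=0.9)
smoothing       = input.int(3, "Smoothing", minval=1, maxval=10)

// Stand-in model outputs.
rawPrediction = ta.rsi(close, 14) / 50.0 - 1.0      // maps RSI into [-1, 1]
prediction    = ta.ema(rawPrediction, smoothing)    // EMA smoothing of raw predictions
confidence    = math.abs(prediction)                // agreement proxy

longSignal  = prediction >  signalThreshold and confidence > signalThreshold
shortSignal = prediction < -signalThreshold and confidence > signalThreshold

plotshape(longSignal,  "Long",  shape.triangleup,   location.belowbar, color.green)
plotshape(shortSignal, "Short", shape.triangledown, location.abovebar, color.red)
```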
The "Signal Threshold" parameter (0.5-0.9) represents a direct control mechanism
over the balance between the quantity and quality of trading signals. A higher
threshold implies a stricter requirement for the model's confidence in its prediction.
Setting a higher signal threshold will lead to fewer, but potentially more reliable,
trading signals, as only the highest conviction predictions will trigger an entry.
Conversely, a lower threshold will generate more signals, increasing trading frequency
but also potentially increasing the rate of false positives or lower-quality trades. This
choice directly impacts the strategy's win rate, profit factor, and overall risk exposure.
This parameter is a critical lever for tailoring the strategy to a trader's specific risk
appetite and the prevailing market conditions. In volatile markets, for instance, a
higher confidence threshold might be prudent to filter out noise, as explicitly stated in
the documentation.3 This reinforces the concept that optimal parameters are dynamic
and require continuous adjustment.
III. Crafting the Trading Strategy: Entry and Exit Logic
A. Translating Indicator Signals into Actionable Entries
The core of converting the Lorentzian Classification indicator into an actionable
trading strategy lies in defining precise entry rules based on its signals. The Lorentzian
Classification algorithm provides a prediction, often a continuous value that is then
categorized, along with an associated confidence level.3
For a long entry, the strategy requires the Lorentzian prediction value to be greater
than a specified threshold and to exhibit high confidence.3 Crucially, for the
"Lorentzian Classification Strategy" variant, this signal must be confirmed by the
closing price being above the 200-period Exponential Moving Average (EMA). This
EMA acts as a primary trend filter, ensuring that long trades are only considered
within an established uptrend.5
Conversely, for a short entry, the strategy seeks a Lorentzian prediction value that is
less than a specified negative threshold, also accompanied by high confidence.3 This
signal must be confirmed by the closing price being below the 200-period EMA,
aligning short trades with a prevailing downtrend.5
The following table summarizes these critical entry conditions:
Table 1: Lorentzian Signal Trigger Conditions
| Signal Type | Lorentzian Prediction Value Condition | Confidence Level Condition | Trend Filter (200 EMA) | Recommended Signal Threshold Range |
|---|---|---|---|---|
| Long Entry | > Signal Threshold | High confidence | Close > 200 EMA | 0.5 - 0.9 |
| Short Entry | < -Signal Threshold | High confidence | Close < 200 EMA | 0.5 - 0.9 |
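In strategy form, the conditions in Table 1 translate into simple entry gates. The model outputs below are stand-ins; only the threshold and 200 EMA logic reflect the table.
```pine
//@version=5
strategy("LC Entry Rules (sketch)", overlay=true)

signalThreshold = input.float(0.6, "Signal Threshold", minval=0.5, maxval=0.9)

// Stand-in model outputs; replace with the real Lorentzian prediction and confidence.
prediction     = ta.ema(ta.rsi(close, 14) / 50.0 - 1.0, 3)
highConfidence = math.abs(prediction) > signalThreshold

ema200 = ta.ema(close, 200)   // trend filter per Table 1

if prediction > signalThreshold and highConfidence and close > ema200
    strategy.entry("Long", strategy.long)
if prediction < -signalThreshold and highConfidence and close < ema200
    strategy.entry("Short", strategy.short)
```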
B. Enhancing Entries with Trend and Volatility Filters
Beyond the core Lorentzian signal and the 200 EMA, the strategy incorporates a
multi-layered filtering approach to enhance the quality and reliability of entry signals.
The 200-period Exponential Moving Average (EMA) serves as a fundamental trend
filter. By requiring the closing price to be above the 200 EMA for long positions and
below it for short positions, the strategy ensures that trades align with the prevailing
market direction, thereby reducing counter-trend entries.5 The Supertrend indicator
is also utilized for additional trend confirmation and plays a role in defining exit
conditions.5
To further refine the machine learning model's predictions, the strategy offers various
configurable boolean filter settings 7:
● Use Volatility Filter: This setting (default: true) enables or disables filtering
based on current market volatility.7
● Use Regime Filter: (Default: true) This filter controls the use of a trend detection
mechanism.2 It includes a configurable Regime Threshold (default -0.1, range -10
to 10) for identifying trending or ranging market conditions.2
● Use ADX Filter: (Default: false) This option controls the application of the
Average Directional Index (ADX) filter, which assesses trend strength.2 It has a
configurable ADX Threshold (default 20, range 0 to 100), where an ADX value
above 25 typically confirms strong trends.7
● Use EMA Filter (General): (Default: false) This general EMA filter has a
configurable Period (default 200, range 1 to 500).7
● Use SMA Filter: (Default: false) Similar to the EMA filter, this Simple Moving
Average (SMA) filter has a configurable Period (default 200, range 1 to 500).7
● Use Kernel Filter: (Default: true) This setting enables or disables trading based
on the Kernel's output.7
● Enhance Kernel Smoothing: (Default: false) This option utilizes a
crossover-based mechanism to smooth kernel color changes, potentially leading
to fewer color transitions and a higher frequency of machine learning entry
signals.7
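Each of these settings acts as a boolean gate that a candidate signal must pass, with a disabled filter always passing. A sketch of a few such gates follows; the rising-ATR check is a simplified proxy for the script's volatility filter, not its actual logic.
```pine
//@version=5
indicator("LC Filter Gates (sketch)", overlay=true)

useVolatilityFilter = input.bool(true,  "Use Volatility Filter")
useAdxFilter        = input.bool(false, "Use ADX Filter")
adxThreshold        = input.int(20, "ADX Threshold", minval=0, maxval=100)
useEmaFilter        = input.bool(false, "Use EMA Filter")
emaPeriod           = input.int(200, "EMA Period", minval=1, maxval=500)

[diPlus, diMinus, adxVal] = ta.dmi(14, 14)

adxOk        = not useAdxFilter or adxVal > adxThreshold              // trend-strength gate
emaOk        = not useEmaFilter or close > ta.ema(close, emaPeriod)   // long-side trend gate
volatilityOk = not useVolatilityFilter or ta.atr(1) > ta.atr(10)      // simplified volatility gate

filtersPass = adxOk and emaOk and volatilityOk
bgcolor(filtersPass ? na : color.new(color.gray, 85))   // shade bars where the gates block entries
```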
The strategy's design incorporates a multi-layered filtering system. This involves the
inherent feature engineering within the Lorentzian Classification model, explicit
external trend filters like the 200 EMA and Supertrend, and a comprehensive set of
configurable filters (Volatility, Regime, ADX, etc.). While this redundancy might appear
excessive, its purpose is to enhance the robustness and quality of entry signals by
ensuring that multiple criteria are met before a trade is initiated. Each additional filter
functions as a "gatekeeper," reducing the frequency of trade signals but aiming to
improve their probability of success by ensuring trades are only taken in favorable
market conditions (e.g., strong trend, low volatility). This layered approach directly
influences the strategy's trade frequency, win rate, and average profit per trade.
However, the sheer number of configurable filters presents a significant optimization
challenge. While these filters offer considerable flexibility, they also increase the risk
of "curve-fitting" during backtesting, where parameters are overly tuned to past data
and subsequently fail in live trading. This underscores that successful implementation
requires extensive and rigorous out-of-sample testing, coupled with a deep
understanding of how each filter interacts with the core Lorentzian signal and the
overall market dynamics.
C. Defining Comprehensive Exit Strategies
Effective exit strategies are as crucial as robust entry signals for the long-term
profitability of any trading system. The Lorentzian Classification strategy offers
distinct approaches to position exits: fixed (default) exits and dynamic exits.
Fixed (Default) Exit Conditions:
These exits are based on a predefined holding period of exactly 4 bars.7 This fixed
duration aligns with the predefined trade length used during the model's initial training
process.7
● endLongTradeStrict: This condition is met if a long position has been held for
precisely four bars, and the last signal was a buy signal. Alternatively, it triggers if
the position has been held for less than four bars, but a new sell signal appears
while the last signal was a buy, provided the initial long trade commenced four
bars prior.7
● endShortTradeStrict: Similarly, this condition is met if a short position has been
held for exactly four bars, and the last signal was a sell signal. Alternatively, it
triggers if held for less than four bars, but a new buy signal appears while the last
signal was a sell, provided the initial short trade commenced four bars prior.7
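A simplified approximation of the strict exit is sketched below: it closes a long after exactly four bars but, for brevity, omits the early exit on a fresh opposing signal, and the entry condition is a placeholder rather than the Lorentzian signal.
```pine
//@version=5
strategy("LC Fixed 4-Bar Exit (sketch)", overlay=true)

// Placeholder entry signal.
if ta.crossover(ta.ema(close, 9), ta.ema(close, 21))
    strategy.entry("Long", strategy.long)

// Count bars since the open trade's entry and close after exactly four.
int barsHeld = 0
if strategy.position_size > 0
    barsHeld := bar_index - strategy.opentrades.entry_bar_index(strategy.opentrades - 1)
if strategy.position_size > 0 and barsHeld >= 4
    strategy.close("Long", comment="4-bar exit")
```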
Dynamic Exit Conditions:
These exits aim to maximize profits by dynamically adjusting the exit threshold, primarily
based on kernel regression logic.7
● endLongTradeDynamic: This condition is met if there is a bearish change in the
kernel, and the previous bar indicated a valid long exit.7
● endShortTradeDynamic: This condition is met if there is a bullish change in the
kernel, and the previous bar indicated a valid short exit.7
● A critical point is that dynamic exits are only considered valid (isDynamicExitValid
is true) if the useEmaFilter, useSmaFilter, and useKernelSmoothing settings are all
set to false.7 This implies a significant trade-off: traders must choose between
leveraging these smoothing/filtering options for signal generation or enabling
dynamic exits, as they cannot be used simultaneously.
Profit Targets and Trailing Stops (from a combined strategy):
The "Lorentzian Classification Strategy" 5 implements a tiered risk/reward approach for
profit-taking. It targets an initial 1:1 risk/reward (R/R) ratio for a portion of the position, while
the remaining stakes aim for a more ambitious 3:1 R/R ratio, indicating a partial profit-taking
strategy.5 Profit targets can also be determined based on the historical performance of
similar trade setups.3
Beyond fixed stop losses, traders are advised to consider using a trailing stop that
moves with the price to lock in profits as a trade progresses.5 Alternatively, setting the
stop loss below recent swing lows for long positions or above recent swing highs for
short positions aligns stops with prevailing market structure.5 For long positions, the
remaining portion of the trade is closed if a sell signal is indicated by either the
Lorentzian model or the Supertrend indicator.5
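Both ideas, a structural stop under a recent swing low and an ATR-sized trailing stop, map onto a single strategy.exit() call, as sketched below. The 10-bar swing lookback and 1x ATR trail are assumptions to be tuned; note that strategy.exit() expects trail distances in ticks.
```pine
//@version=5
strategy("LC Trailing and Structural Stops (sketch)", overlay=true)

atrValue = ta.atr(14)
swingLow = ta.lowest(low, 10)   // recent swing-low proxy; lookback is an assumption

if ta.crossover(ta.ema(close, 9), ta.ema(close, 21))   // placeholder entry signal
    strategy.entry("Long", strategy.long)

// Structural stop at the swing low plus an ATR-sized trailing stop
// (trail_points / trail_offset are expressed in ticks).
strategy.exit("XL", "Long", stop=swingLow, trail_points=atrValue / syminfo.mintick, trail_offset=atrValue / syminfo.mintick)
```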
The strategy presents a dichotomy in its exit logic: a "default 4-bar exit" that aligns
with the model's training horizon 7 versus "dynamic exits" designed to "let profits
ride".7 This reveals a fundamental tension between adhering to the inherent
short-term predictive nature of the core Lorentzian model and the broader objective
of maximizing profits through adaptive trade management. The critical dependency
that dynamic exits are only valid when certain filters are off 7 further complicates this
choice. The selection of an exit strategy profoundly impacts the average holding
period, the strategy's profit factor, and its overall profitability profile. Fixed exits might
lead to leaving significant profits on the table during strong trends but provide
consistent, short-term validation. Dynamic exits aim for larger gains but introduce
greater complexity and potential instability if not carefully managed, especially given
their specific compatibility requirements with other filters. This highlights that a
comprehensive trading strategy is far more than just entry signals; robust and
adaptable exit management is equally, if not more, critical for long-term success.
Traders must carefully weigh the trade-offs between adhering to the model's training
characteristics and implementing more flexible, profit-maximizing exit rules,
understanding the implications for overall strategy performance and stability.
The following table provides a concise and comparative overview of the two primary
exit mechanisms:
Table 2: Strategy Exit Conditions Summary
| Exit Type | Primary Trigger Condition | Secondary Trigger/Confirmation | Compatibility Notes | Purpose/Benefit |
|---|---|---|---|---|
| Default/Fixed | Position held for 4 bars | New opposing signal (if held < 4 bars) | Always active (if useDynamicExits is false or isDynamicExitValid is false) | Adheres to model training; consistent exits; short-term validation |
| Dynamic | Bearish/bullish change in kernel | Previous bar had a valid exit | Requires useEmaFilter, useSmaFilter, and useKernelSmoothing to all be false | Attempts to maximize profits; adaptive to market shifts |
IV. Implementing a Robust Risk Management Framework
Effective risk management is the cornerstone of any sustainable trading strategy,
regardless of the sophistication of its underlying algorithms. While the Lorentzian
Classification algorithm aims to generate high-quality signals and various filters
further refine these, no trading signal is infallible. Therefore, the ultimate protection
for a trading account lies in a robust risk management framework.
A. Position Sizing and Capital Allocation Principles
Disciplined position sizing is paramount for capital preservation. The strategy
advocates for percentage-based risk management, where position size is
calculated as a fixed percentage of the trading capital per trade, rather than a fixed
dollar amount. This approach allows for compounded growth while inherently
managing risk relative to account size.5
A critical guideline is to never exceed 2-3% of total trading capital per trade.3 This
strict limit minimizes the impact of any single losing trade on the overall portfolio,
preventing catastrophic losses and allowing the strategy to withstand drawdowns.
Furthermore, the strategy encourages confidence-based sizing, suggesting that
larger positions can be considered for signals that exhibit higher confidence levels.3
This introduces an adaptive element to risk, aligning capital deployment with the
perceived quality of the signal. The strategy's backtesting capabilities also include
applying leverage settings in risk management for more accurate backtesting results,
allowing traders to model the impact of borrowed capital.5
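A minimal sketch of percentage-based sizing with a confidence multiplier follows. The entry condition and the fixed confidence value are placeholders; in practice the confidence would come from the model, and the 2x ATR stop multiple is an assumption.
```pine
//@version=5
strategy("LC Position Sizing (sketch)", overlay=true)

riskPct = input.float(2.0, "Risk % of equity per trade", minval=0.1, maxval=3.0)
atrMult = input.float(2.0, "ATR stop multiple")

stopDist = ta.atr(14) * atrMult                // per-unit risk: distance to the stop
riskCash = strategy.equity * riskPct / 100.0   // cash at risk on this trade

// Confidence-based scaling: a placeholder constant standing in for the model's
// confidence output; the linear scaling rule itself is an assumption.
confidence = 0.75
qty = riskCash / stopDist * confidence

if ta.crossover(ta.ema(close, 9), ta.ema(close, 21))   // placeholder entry signal
    strategy.entry("Long", strategy.long, qty=qty)
    strategy.exit("XL", "Long", stop=close - stopDist)
```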
Proper position sizing directly controls the maximum potential loss on any single
trade. By limiting this loss, even a series of losing trades will not catastrophically
deplete the trading capital, allowing the strategy to survive drawdowns and capitalize
on subsequent winning streaks. Confidence-based sizing further refines this by
dynamically adjusting exposure based on the perceived quality of the signal,
potentially boosting returns during high-conviction periods while reducing risk during
uncertain ones. This section underscores that even with the most advanced machine
learning algorithms, the foundational principles of sound risk management remain
paramount. The "scarily accurate" 9 perception of the indicator could, paradoxically,
lead to overconfidence and excessive risk-taking, which is precisely what robust risk
management aims to prevent. Consistent profitability is ultimately more about
managing risk than about perfect prediction.
B. Stop-Loss Mechanisms: ATR-Based and Structural Stop Placement
The placement of stop-losses is a critical component of risk management, directly
influencing the maximum loss on a trade and the strategy's susceptibility to market
noise. A static, fixed-percentage stop-loss can be detrimental in varying market
conditions. In highly volatile environments, a fixed stop might be too tight, leading to
premature exits (whipsaws). In calm markets, it might be unnecessarily wide, risking
more capital than required.
The Lorentzian Classification strategy primarily utilizes the Average True Range
(ATR) indicator to set stop-loss levels.5 ATR, by its very definition, measures market
volatility. By directly linking the stop-loss distance to the Average True Range, the
strategy dynamically adjusts its risk exposure to the prevailing market volatility. This
provides a dynamic stop-loss that expands during volatile periods and contracts
during calmer ones. This allows for more intelligent trade management, potentially
reducing the frequency of premature exits in choppy markets while providing
adequate room for price movement during trending phases.
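A minimal sketch of such volatility-scaled stop levels, assuming a 2x ATR multiple (a common but arbitrary choice):
```pine
//@version=5
indicator("ATR Stop Levels (sketch)", overlay=true)

atrLen  = input.int(14, "ATR Length")
atrMult = input.float(2.0, "ATR Multiplier")   // tune per market and timeframe

atrValue  = ta.atr(atrLen)
longStop  = close - atrMult * atrValue   // widens automatically as volatility expands
shortStop = close + atrMult * atrValue

plot(longStop,  "Long stop",  color.red)
plot(shortStop, "Short stop", color.maroon)
```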
Beyond ATR, traders are advised to consider using a trailing stop that moves with the
price to lock in profits as a trade progresses.5 Alternatively, setting the stop loss below
recent swing lows for long positions or above recent swing highs for short positions
aligns stops with underlying market structure.5 Furthermore, stop losses should be
adjusted in response to market regimes; for instance, wider stops may be
necessary in volatile market conditions to avoid being stopped out prematurely by
increased price swings.3 This sophisticated approach to stop-loss placement is a key
enhancement over simpler fixed stops, demonstrating a deeper understanding of
market dynamics within the strategy. It aligns with the broader theme of market
regime adaptation, ensuring that the risk management framework is as dynamic as
the market itself.
C. Optimizing Risk/Reward Ratios (1:1, 3:1)
The strategy implements a tiered approach to profit-taking, reflecting a pragmatic
understanding of market behavior. A 1:1 risk/reward (R/R) ratio is applied to the initial
portion of the position, meaning that the profit target equals the stop-loss distance. For
the remaining stakes, a more ambitious 3:1 R/R ratio is targeted.5 This tiered
approach implies partial profit-taking, where a portion of the position is closed at a
smaller, more probable profit, while the remainder is allowed to run for potentially
larger gains.
This phased profit-taking approach balances the desire for high-probability, smaller
wins with the potential for larger, less frequent, trend-following profits. The initial 1:1
target serves to quickly de-risk the trade by securing some profits, thereby reducing
exposure and psychological pressure. The remaining portion is then free to run for
larger, less frequent gains. This mitigates the risk of a market reversal wiping out all
gains on a position that has already moved favorably. Profit targets can also be
determined based on the historical performance of similar trade setups.3 This is a
common and effective strategy in trend-following systems, as it provides a more
robust profit-taking mechanism than a single, fixed target. It reflects a nuanced
understanding of market behavior, where some trends extend significantly while
others are short-lived.
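The tiered exit maps naturally onto two strategy.exit() orders sharing one stop: the first closes half the position at 1:1, the second lets the remainder run toward 3:1. The ATR-based risk unit and placeholder entry signal below are assumptions.
```pine
//@version=5
strategy("LC Tiered R/R Exits (sketch)", overlay=true)

risk = 2.0 * ta.atr(14)   // stop distance; the ATR multiple is an assumption

if ta.crossover(ta.ema(close, 9), ta.ema(close, 21))   // placeholder entry signal
    strategy.entry("Long", strategy.long)
    // First half exits at 1:1 R/R; the remainder targets 3:1. Both legs share
    // the same initial stop (entry-bar close is used as the entry-price proxy).
    strategy.exit("TP1", "Long", qty_percent=50, limit=close + risk, stop=close - risk)
    strategy.exit("TP2", "Long", limit=close + 3 * risk, stop=close - risk)
```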
D. Adapting Risk Management to Market Regimes
A crucial aspect of robust risk management is its adaptability to varying market
conditions. It is essential to recognize the prevailing market phase—whether it is
ranging, trending, or highly volatile—and adjust the trading approach and risk
management parameters accordingly.5
In volatile market conditions, the strategy suggests requiring higher confidence
thresholds for signals 3 and potentially employing wider stop losses 3 to avoid being
stopped out prematurely by increased price swings. Conversely, in low volume
environments, it is recommended to reduce position sizes and increase caution.3 This
dynamic adjustment of risk parameters based on market behavior enhances the
strategy's resilience and improves its performance across different market cycles.
V. Practical Application and Optimization on TradingView
A. Key Strategy Settings and Parameters
Translating the Lorentzian Classification algorithm into a functional TradingView
strategy necessitates careful configuration of its various settings. These parameters
allow traders to fine-tune the algorithm's behavior to specific market characteristics,
trading timeframes, and personal risk tolerance.
● Neighbors Count (1-100, default 8): This parameter determines the number of
historical patterns considered for the KNN classification. An optimal range is
typically 5-8 for most markets. More neighbors lead to smoother but less
responsive signals.2
● Max Bars Back (default 2000): Controls the range of historical data used for
backtesting. Reducing this value can speed up script loading if performance is an
issue.2
● Prediction Horizon (1-20): Defines how far ahead the algorithm attempts to
predict price movement. Shorter horizons suit scalping, while longer horizons are
better for swing trading.3
● Signal Threshold (0.5-0.9): Sets the minimum confidence level required for a
signal to be generated. Higher values reduce false signals but may cause missed
opportunities.3
● Smoothing (1-10): Applies an EMA to raw predictions. More smoothing reduces
noise but introduces more lag.3
● Feature Count (2-5, default 5): Specifies the number of features used for
machine learning predictions.7
● Color Compression (1-10, default 1): Adjusts the intensity of the color scale.2
● Lookback Window (3-50, default 8): Controls the number of bars used for
estimation.2
● Relative Weighting (0.25-25, default 8): Controls the relative weighting of
timeframes.2
The following table provides a summary of recommended parameters and their
impact:
Table 3: Recommended Lorentzian Strategy Parameters
| Parameter | Typical Range | Default Value | Impact on Strategy |
|---|---|---|---|
| Neighbors Count (K) | 5-8 (optimal) | 8 | Lower: more responsive, potentially noisy; higher: smoother, more lagged |
| Max Bars Back | 50-2000 | 2000 | Controls backtesting data range; reduce for faster loading |
| Prediction Horizon | 1-20 | N/A | Shorter: scalping; longer: swing trading |
| Signal Threshold | 0.5-0.9 | N/A | Higher: fewer, more reliable signals; lower: more signals, higher false-positive risk |
| Smoothing | 1-10 | N/A | Higher: less noise, more lag |
| Feature Window | 5-30 | N/A | Longer: more context, less sensitivity |
| Lookback Window | 3-50 | 8 | Number of bars used for estimation |
| Relative Weighting | 0.25-25 | 8 | Controls timeframe weighting |
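These parameters map directly onto input.*() declarations whose minval/maxval bounds mirror Table 3. The identifier names, and the defaults for parameters the table marks N/A, are assumptions; the published scripts use their own identifiers.
```pine
//@version=5
indicator("LC Inputs (sketch)", overlay=true)

neighborsCount    = input.int(8, "Neighbors Count", minval=1, maxval=100)
maxBarsBack       = input.int(2000, "Max Bars Back", minval=50)
predictionHorizon = input.int(4, "Prediction Horizon", minval=1, maxval=20)
signalThreshold   = input.float(0.6, "Signal Threshold", minval=0.5, maxval=0.9)
smoothing         = input.int(3, "Smoothing", minval=1, maxval=10)
featureWindow     = input.int(14, "Feature Window", minval=5, maxval=30)
lookbackWindow    = input.int(8, "Lookback Window", minval=3, maxval=50)
relativeWeighting = input.float(8.0, "Relative Weighting", minval=0.25, maxval=25)

// Trivial output so the script compiles; the inputs would feed the model logic.
plot(neighborsCount, "K", display=display.none)
```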
B. Advanced Filtering Techniques and Their Configuration
The strategy offers a comprehensive set of configurable filters to refine the core
Lorentzian signals and align them with specific market conditions. These filters act as
additional layers of confirmation, aiming to reduce false positives and improve trade
quality.
● Use Volatility Filter (default: true): Enables or disables filtering based on
market volatility, which can be crucial for adapting to different market
environments.7
● Use Regime Filter (default: true): Activates a trend detection filter with a
configurable Regime Threshold (default: -0.1, range: -10 to 10) to identify
trending or ranging markets. This is particularly useful for ensuring trades are
taken in appropriate market phases.2
● Use ADX Filter (default: false): Enables an ADX filter, which measures trend
strength. It has a configurable ADX Threshold (default: 20, range: 0 to 100), with
values above 25 typically indicating a strong trend.2
● Use EMA Filter (default: false) and Use SMA Filter (default: false): These
allow for additional moving average filtering, each with a configurable Period
(default: 200, range: 1 to 500).7
● Use Kernel Filter (default: true): Enables or disables trading based on the
Kernel's output.7
● Show Kernel Estimate (default: false): Allows visualization of the Kernel
Estimate.7
● Enhance Kernel Smoothing (default: false): Uses a crossover-based
mechanism to smooth kernel color changes, potentially leading to fewer color
transitions and more machine learning entry signals.7
The strategy's reliance on multiple layers of filtering, from inherent feature
engineering to explicit external indicators and configurable internal filters, creates a
complex system. Each additional filter acts as a "gatekeeper," reducing the frequency
of trade signals but aiming to improve their probability of success by ensuring trades
are only taken in favorable market conditions. This layered approach directly impacts
the strategy's trade frequency, win rate, and average profit per trade. While these
filters offer considerable flexibility, they also increase the risk of "curve-fitting" during
backtesting, where parameters are overly tuned to past data and subsequently fail in
live trading. This underscores that successful implementation requires extensive and
rigorous out-of-sample testing, coupled with a deep understanding of how each filter
interacts with the core Lorentzian signal and the overall market dynamics.
C. Backtesting and Performance Evaluation Considerations
Thorough backtesting is indispensable for validating and optimizing any trading
strategy. When evaluating the Lorentzian Classification strategy on TradingView,
several critical considerations must be addressed to ensure realistic and reliable
performance metrics.
● Accounting for Commissions, Slippage, and Leverage: Backtesting results can
be significantly inflated if real-world trading costs are not factored in. It is
essential to configure backtesting settings to account for commissions, which are
fees paid per trade, and slippage, which is the difference between the expected
price of a trade and the price at which the trade is actually executed. The
strategy's backtesting capabilities also allow for the application of leverage
settings in risk management, which is crucial for accurately modeling the impact
of borrowed capital on returns and risk.5
● Interpreting Backtest Results and Optimizing for Robustness: Backtest
results should be interpreted with a critical eye. High profitability in a backtest
does not guarantee future success, especially if the strategy is over-optimized to
past data (curve-fitting). Traders should focus on metrics beyond just net profit,
such as drawdowns, profit factor, win rate, and average trade duration. Optimizing
for robustness involves testing the strategy across different market conditions,
timeframes, and assets, and seeking parameter sets that perform consistently
rather than exceptionally in one specific historical period. Performance reviews
should be conducted regularly to identify patterns in success and failure.5
D. Pine Script Considerations for Strategy Conversion
Converting a TradingView indicator to a strategy in Pine Script involves transitioning
from plotting signals to executing virtual trades. A Pine Script indicator uses plot() and
plotshape() functions to visualize data, while a strategy uses strategy.entry(),
strategy.exit(), and strategy.close() functions to simulate trades.
For the Lorentzian Classification, the core logic for generating buy/sell signals (based
on prediction value thresholds and confidence levels) would need to be translated into
strategy.entry() calls. The various filters (EMA, Supertrend, ADX, Volatility, Regime,
Kernel) would become boolean conditions that must be met before an entry is
triggered. Exit conditions, whether fixed 4-bar exits or dynamic kernel-based exits,
would be implemented using strategy.exit() or strategy.close() functions. Risk
management components, such as ATR-based stop-losses and profit targets, would
also be coded into the strategy logic. The //@version=5 directive is standard for
modern Pine Script.10 Given the complexity of implementing Lorentzian distance
calculations with k-nearest-neighbors classification, simpler aspects of the script are
easily overlooked in the pursuit of advanced physics-based indicators.3 The
open-source nature of some Lorentzian scripts can facilitate review and verification of
functionality.3
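Pulling these pieces together, a skeleton of such a conversion might look as follows. The prediction is a placeholder to be swapped for the real Lorentzian calculation, and the commission, slippage, risk, and threshold figures are illustrative assumptions.
```pine
//@version=5
strategy("Lorentzian Classification Strategy (skeleton)", overlay=true,
  commission_type=strategy.commission.percent, commission_value=0.05, slippage=2)

signalThreshold = input.float(0.6, "Signal Threshold", minval=0.5, maxval=0.9)
riskPct         = input.float(2.0, "Risk % per trade", minval=0.1, maxval=3.0)
atrMult         = input.float(2.0, "ATR stop multiple")

// --- Placeholder model output: substitute the real Lorentzian prediction ---
prediction     = ta.ema(ta.rsi(close, 14) / 50.0 - 1.0, 3)
highConfidence = math.abs(prediction) > signalThreshold

// --- Trend filter ---
ema200 = ta.ema(close, 200)

// --- Risk sizing ---
stopDist = atrMult * ta.atr(14)
qty      = strategy.equity * riskPct / 100.0 / stopDist

// --- Entries and tiered exits (plotshape() in the indicator becomes strategy.entry() here) ---
if prediction > signalThreshold and highConfidence and close > ema200
    strategy.entry("Long", strategy.long, qty=qty)
    strategy.exit("TP1L", "Long", qty_percent=50, limit=close + stopDist, stop=close - stopDist)
    strategy.exit("TP2L", "Long", limit=close + 3 * stopDist, stop=close - stopDist)
if prediction < -signalThreshold and highConfidence and close < ema200
    strategy.entry("Short", strategy.short, qty=qty)
    strategy.exit("TP1S", "Short", qty_percent=50, limit=close - stopDist, stop=close + stopDist)
    strategy.exit("TP2S", "Short", limit=close - 3 * stopDist, stop=close + stopDist)
```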
VI. Best Practices, Limitations, and Continuous Improvement
A. Signal Confirmation: Combining Lorentzian Signals with Price Action, Volume,
and Multi-Timeframe Analysis
While the Lorentzian Classification algorithm offers sophisticated, machine
learning-driven signals, it is a best practice to never trade solely on these signals. All
indicators are lagging to some extent, and by the time a signal appears, a significant
portion of the price move may have already occurred.6 To enhance the reliability and
conviction of trades, Lorentzian signals should always be confirmed with other forms
of market analysis:
● Price Action: Cross-referencing Lorentzian signals with chart patterns, support
and resistance levels, and candlestick formations can significantly improve entry
and exit points.5 For instance, a long signal gains strength if it coincides with a
bounce off a strong support level.
● Volume Analysis: Introducing volume analysis helps validate breakout and
breakdown signals. Strong volume accompanying a Lorentzian signal indicates
conviction behind the price movement, whereas low volume might suggest a false
signal or lack of institutional interest.5
● Multi-Timeframe Analysis: Aligning signals across different timeframes can
provide stronger confirmation. A signal on a lower timeframe (e.g., 1-hour) gains
credibility if it is in sync with the broader trend identified on a higher timeframe
(e.g., 4-hour or daily).3
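Higher-timeframe alignment can be checked with request.security(), as in the sketch below; the 4-hour timeframe and 200 EMA length are assumptions to adapt to your own timeframe pairing.
```pine
//@version=5
indicator("Higher-Timeframe Trend Confirmation (sketch)", overlay=true)

// Trend state of the 4-hour 200 EMA; lookahead is off to avoid repainting.
htfUptrend = request.security(syminfo.tickerid, "240", close > ta.ema(close, 200), barmerge.gaps_off, barmerge.lookahead_off)

// A lower-timeframe long signal would only be acted on while htfUptrend is true.
bgcolor(htfUptrend ? color.new(color.green, 90) : color.new(color.red, 90))
```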
B. Market Phase Adaptation: Tailoring the Strategy to Trending, Ranging, and
Volatile Environments
Markets are dynamic and constantly shift between different phases. A robust trading
strategy must be adaptable to these changes. The interpretation and application of
Lorentzian signals should be adjusted based on the prevailing market regime:
● Trending Markets: In strong trending markets, there is higher confidence in
directional signals generated by the Lorentzian algorithm. The strategy can lean
more heavily on these signals for entries.3
● Ranging Markets: In ranging or consolidating markets, the focus should shift to
reversal signals at market extremes (e.g., near established support or resistance
levels). Directional signals might be less reliable in such environments.3
● Volatile Markets: High volatility requires a more cautious approach. Higher
confidence thresholds for signals may be necessary to filter out noise, and stop
losses might need to be wider to accommodate increased price swings.3 In low
volume conditions, reducing position sizes is advisable.3
Recognizing the market phase and adapting the trading approach accordingly is
essential for sustained performance.5
C. The Importance of Ongoing Optimization and Performance Review
Trading strategies are not static entities; they require continuous monitoring, review,
and optimization to remain effective.
● Ongoing Optimization: Market dynamics evolve, and parameters that worked
well in the past may become less effective. Regularly re-evaluating and optimizing
the strategy's parameters (e.g., K-Neighbors, Prediction Horizon, Signal
Threshold, filter settings) based on recent market conditions is crucial. However,
this must be done carefully to avoid over-optimization or curve-fitting.
● Performance Review: Conducting regular reviews of past trades is vital. This
involves analyzing both winning and losing trades to identify patterns in success
and failure, understand why certain signals worked or failed, and uncover areas
for improvement.5 This iterative process of analysis and refinement is key to
long-term profitability.
D. Understanding the Inherent Limitations of Indicators and Machine Learning in
Trading
Despite the advanced nature of the Lorentzian Classification algorithm, it is imperative
to acknowledge the inherent limitations of all indicators and machine learning models
in predicting future price movements with certainty.3
● Lagging Nature: All technical indicators, including those powered by machine
learning, are inherently based on past price data. Even so-called "leading"
indicators are merely speculating.6 By the time an indicator generates a signal, the
price action may have already largely unfolded.6
● No Guarantee of Future Results: The Lorentzian classification system reveals
market patterns but cannot predict future price movements with certainty.3 No
indicator or strategy can guarantee profitable trading results.3 Financial markets
are complex, chaotic systems that respond to predictions, making them inherently
difficult to predict accurately.9
● Educational and Research Purposes: The Lorentzian Classification indicator is
explicitly stated to be for educational and research purposes only and does not
constitute financial advice.3 Traders must always conduct their own analysis and
never risk more than they can afford to lose.3
Relying solely on indicators or basing an entire strategy on them without
understanding price action, support/resistance, and proper risk management is likely
to lead to struggles.6 Indicators are best used for confirmation or to improve
confidence in taking or staying in a trade, rather than as the primary means of
generating setups.6
VII. Conclusion
Converting the Lorentzian Classification algorithm from a TradingView indicator into a
robust trading strategy involves a meticulous process that extends far beyond merely
identifying buy and sell signals. This report has detailed the necessary components,
from understanding the algorithm's unique Lorentzian distance metric and its feature
engineering to defining comprehensive entry and exit rules, and implementing a
disciplined risk management framework.
The Lorentzian Classification algorithm, with its ability to handle outliers and
non-linear market relationships, offers a powerful foundation for a trading system.
However, its effectiveness as a strategy is significantly amplified by integrating it with
traditional technical analysis, such as the 200 EMA and Supertrend, and by applying a
multi-layered filtering paradigm. The nuanced approach to exit management, offering
both fixed and dynamic options, provides flexibility, though it requires careful
consideration of parameter interdependencies.
Ultimately, the success of a Lorentzian Classification-based strategy hinges on a
commitment to rigorous backtesting, continuous optimization, and an unwavering
adherence to sound risk management principles. Position sizing based on a
percentage of capital, dynamic ATR-based stop-losses, and a tiered risk/reward
approach are crucial for capital preservation and sustainable growth. While machine
learning offers advanced analytical capabilities, it is imperative to acknowledge the
inherent limitations of all indicators. The Lorentzian Classification strategy serves best
as a sophisticated tool within a broader, adaptive trading methodology, confirmed by
price action, volume, and adapted to prevailing market regimes. Traders must
approach its implementation with a blend of technological acumen, analytical rigor,
and disciplined caution to navigate the complexities of financial markets successfully.
Works cited
1. www.youtube.com, accessed May 27, 2025, https://www.youtube.com/watch?v=zs5NnkcNczY
2. Machine Learning: Lorentzian Classification — Indicator by jdehorty, accessed May 27, 2025, https://tw.tradingview.com/script/WhBzgfDu-Machine-Learning-Lorentzian-Classification/
3. Lorentzian Classification - Advanced Trading Dashboard — Indicator by DskyzInvestments, accessed May 27, 2025, https://www.tradingview.com/script/yt4TgB3l-Lorentzian-Classification-Advanced-Trading-Dashboard/
4. Lorentzian — Indicators and Strategies — TradingView — India, accessed May 27, 2025, https://in.tradingview.com/scripts/lorentzian/
5. Lorentzian Classification Strategy (TradingView) - 119 Backtests - TradeSearcher, accessed May 27, 2025, https://tradesearcher.ai/strategies/2019-lorentzian-classification-strategy
6. Most successful indicator : r/TradingView - Reddit, accessed May 27, 2025, https://www.reddit.com/r/TradingView/comments/1426vuh/most_successful_indicator/
7. Lorentzian Classification Strategy by StrategiesForEveryone (modded) - Scribd, accessed May 27, 2025, https://www.scribd.com/document/785523605/Lorentzian-Classification-Strategy-by-StrategiesForEveryone-modded
8. Trading Strategies & Indicators Built by TradingView Community, accessed May 27, 2025, https://www.tradingview.com/scripts/
9. Is this a stupid strategy? : r/stocks - Reddit, accessed May 27, 2025, https://www.reddit.com/r/stocks/comments/167axw4/is_this_a_stupid_strategy/
10. Pine script-2 | PDF | Applied Mathematics - Scribd, accessed May 27, 2025, https://www.scribd.com/document/854671381/Pine-script-2
11. Lorentzianclassification — Indicators and Strategies — TradingView — India, accessed May 27, 2025, https://in.tradingview.com/scripts/lorentzianclassification/