Ideas Blank Slate

The document outlines a trading strategy utilizing Heikin-Ashi bars and various indicators such as ADX, DMI, and Chandelier Exit to identify potential buy signals. It details the criteria for weekly and monthly trading conditions, as well as the steps for calculating and filtering signals based on historical price data. Additionally, it specifies an exclusion rule for simultaneous B and B1 signals on the same candle when certain conditions regarding ADX and Chandelier Exit are not met.

Uploaded by Ravi Nand

Weekly loose TBC:

 Weekly Heikin-Ashi ADX DI positive >= ADX DI negative
 Weekly Heikin-Ashi high > 10 EMA weekly & Weekly Heikin-Ashi high > 20 EMA weekly & Weekly Heikin-Ashi high > 50 EMA weekly
 Weekly Donchian HL green

Weekly tight TBC:

 Weekly Heikin-Ashi ADX DI positive >= ADX DI negative
 Weekly Heikin-Ashi high > 10 EMA weekly & Weekly Heikin-Ashi high > 20 EMA weekly & Weekly Heikin-Ashi high > 50 EMA weekly
 Weekly Donchian HL green
 Weekly Heikin-Ashi ADX rising or flat (ADX >= previous week's ADX)

Monthly tight TBC:

 Monthly Heikin-Ashi ADX DI positive >= ADX DI negative
 Monthly Heikin-Ashi high > 10 EMA monthly & Monthly Heikin-Ashi high > 20 EMA monthly & Monthly Heikin-Ashi high > 50 EMA monthly
 Monthly Donchian HL green
 Monthly Heikin-Ashi ADX rising or flat (ADX >= previous month's ADX)
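Taken together, a tight-TBC check reduces to a single boolean per bar. A minimal sketch, assuming scalar inputs for one weekly or monthly Heikin-Ashi bar (the function and argument names are mine, not from the original scripts):

```python
def tight_tbc(di_plus, di_minus, ha_high, ema10, ema20, ema50,
              donchian_green, adx, adx_prev):
    """All tight-TBC criteria above, for one weekly or monthly HA bar."""
    return (di_plus >= di_minus          # DI+ at or above DI-
            and ha_high > ema10          # HA high above all three EMAs
            and ha_high > ema20
            and ha_high > ema50
            and donchian_green           # Donchian HL is green
            and adx >= adx_prev)         # ADX rising or flat
```

The loose variant simply drops the final ADX condition.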

Daily TBC types: FTBC, CTBC

FTBC satisfies the following.

Here's a step-by-step, plain-English walkthrough of what this script does, followed by a direct
answer to your question about excluding B and B1 in the same candle:

1. Set-Up & Libraries

1. Install dependencies: It uses yfinance to pull price data and pandas/numpy for data
manipulation.
2. Import modules: Brings in pandas, numpy, yfinance, and a couple of date-time helpers.
2. User-Defined Parameters

 UNIVERSE: Either "sp500" (to screen all S&P 500 tickers) or a custom list of symbols.
 PERIOD: How far back to download (e.g., "6mo" for six months).
 LEN_DM, LEN_ADX: Look-back lengths for the DMI/ADX calculations.
 DI_THRESH: A threshold (20) used specifically for the “B1” signal.
 MAX_GAP: Defines a “lookback window” of up to 4 candles when checking if similar
signals occurred recently.
 Chandelier Exit (CE) parameters:
o CE_LEN: ATR look-back (7 candles).
o CE_MULT: Multiplier for ATR (1.8).
o USE_CLOSE: If True, the Chandelier Exit is calculated using the Heikin-Ashi
close price.

3. Wilder RMA Function (rma)

 This helper (rma) computes a Wilder-style moving average (the same smoothing
PineScript’s ta.rma uses). Internally it’s just an exponential moving average with α =
1/length.
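A minimal pure-Python sketch of that smoothing, seeded with the simple average of the first `length` values as PineScript's ta.rma is (the list-based interface is mine; the real script works on pandas Series):

```python
def rma(values, length):
    """Wilder-style moving average: an EMA with alpha = 1/length."""
    alpha = 1.0 / length
    out, prev = [], None
    for i, v in enumerate(values):
        if i < length - 1:
            out.append(None)                          # not enough data yet
        elif i == length - 1:
            prev = sum(values[:length]) / length      # seed with simple average
            out.append(prev)
        else:
            prev = alpha * v + (1 - alpha) * prev     # Wilder smoothing step
            out.append(prev)
    return out
```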

4. Heikin-Ashi Bar Calculator (heikin_ashi)

1. HA_Close: Average of (Open + High + Low + Close) for each raw bar.
2. HA_Open:
o The first bar’s HA_Open is the average of the first raw Open & Close.
o Each subsequent HA_Open is the average of the previous bar’s HA_Open and
HA_Close.
3. HA_High / HA_Low:
o HA_High is the max of (raw High, HA_Open, HA_Close).
o HA_Low is the min of (raw Low, HA_Open, HA_Close).
4. Carries over volume for reference (though volume isn’t used later).
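The four steps above, as a minimal pure-Python sketch (the tuple/dict interface is mine; the real script builds a DataFrame):

```python
def heikin_ashi(bars):
    """bars: list of (open, high, low, close) tuples -> list of HA bar dicts."""
    ha = []
    for i, (o, h, lo, c) in enumerate(bars):
        ha_close = (o + h + lo + c) / 4.0             # step 1
        if i == 0:
            ha_open = (o + c) / 2.0                   # step 2: first bar seed
        else:
            ha_open = (ha[-1]["open"] + ha[-1]["close"]) / 2.0
        ha.append({"open": ha_open,
                   "high": max(h, ha_open, ha_close),  # step 3
                   "low": min(lo, ha_open, ha_close),
                   "close": ha_close})
    return ha
```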

5. DMI/ADX Calculation on HA Bars (pine_dmi)

 plusDI / minusDI:
o Compute directional movements (up = hi.diff(), down = -lo.diff()).
o If today's "up" > "down" and > 0 ⇒ plusDM = up, else 0.
o If today's "down" > "up" and > 0 ⇒ minusDM = down, else 0.

 True Range (TR):
o max of (today’s high–low, |high–prior close|, |low–prior close|).
 Smooth TR, plusDM, minusDM using rma over LEN_DM.
 plusDI (%) = 100 × ( smoothed plusDM / smoothed TR )
minusDI (%) = 100 × ( smoothed minusDM / smoothed TR )
 DX = 100 × |plusDI – minusDI| / (plusDI + minusDI)
Then ADX = rma(DX, LEN_ADX).
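The per-bar building blocks (+DM, -DM, TR) can be sketched like this; DI, DX and ADX then follow by rma-smoothing these series as described (the helper name and tuple layout are mine):

```python
def dm_tr(prev_bar, bar):
    """One bar's DMI building blocks. Bars are (high, low, close) tuples.
    Returns (+DM, -DM, TR); DI/ADX follow by rma-smoothing these series."""
    up = bar[0] - prev_bar[0]            # hi.diff()
    down = prev_bar[1] - bar[1]          # -lo.diff()
    plus_dm = up if (up > down and up > 0) else 0.0
    minus_dm = down if (down > up and down > 0) else 0.0
    tr = max(bar[0] - bar[1],            # today's high - low
             abs(bar[0] - prev_bar[2]),  # |high - prior close|
             abs(bar[1] - prev_bar[2]))  # |low - prior close|
    return plus_dm, minus_dm, tr
```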

6. Build Your Ticker List

 If UNIVERSE == "sp500", scrape Wikipedia’s S&P 500 table to get all symbols.
 Otherwise, use whatever list you provided.
 Download daily prices (Auto-adjusted for splits/dividends) for the chosen tickers
over the specified period—all in one bulk call to yf.download.

7. Screening Loop (per ticker) to Find “FTBC” Entries

The goal is to detect a custom “entry” signal (ftbc_raw + EMA filters) on the most recent
Heikin-Ashi bar. If a ticker passes, it’s added to matches.

7.1. Early Bail-Out

 Skip this symbol if there aren’t at least min_bars of data (enough for ADX, Chandelier
Exit, and a few extra lookback candles).

7.2. Build HA Bars

 Call heikin_ashi(df) to create a small DataFrame of HA_Open, HA_High, HA_Low, HA_Close.

7.3. Compute +DI, –DI, ADX on HA Bars

 Call pine_dmi on the HA bars.

7.4. Identify Two DMI “Crossover” Signals: B and B1

1. B (crossB):
o Yesterday, plusDI ≤ minusDI, and today plusDI > minusDI.
o In other words: a classic +DI / –DI bullish crossover.
2. B1 (crossB1):
o Yesterday, plusDI ≤ DI_THRESH (20), and today plusDI > 20 and plusDI >
minusDI.
o This is a stronger “plusDI just crossed above 20” filter plus it’s above minusDI.
Both crossB and crossB1 are boolean Series aligned to each HA bar.
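As a minimal sketch on plain lists (the real script uses shifted pandas Series; names are mine):

```python
DI_THRESH = 20.0  # threshold used for the B1 signal

def cross_signals(plus_di, minus_di):
    """Return (crossB, crossB1) boolean lists aligned to the input bars."""
    n = len(plus_di)
    crossB, crossB1 = [False] * n, [False] * n
    for i in range(1, n):
        # B: classic +DI / -DI bullish crossover
        crossB[i] = plus_di[i - 1] <= minus_di[i - 1] and plus_di[i] > minus_di[i]
        # B1: +DI crosses above 20 while also above -DI
        crossB1[i] = (plus_di[i - 1] <= DI_THRESH
                      and plus_di[i] > DI_THRESH
                      and plus_di[i] > minus_di[i])
    return crossB, crossB1
```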

7.5. Compute Chandelier Exit (CE) “Buy” & “Sell” Signals on HA_Close

1. True Range on HA = max of (HA_High – HA_Low, |HA_High – prior HA_Close|, |HA_Low – prior HA_Close|).
2. ATR on HA = rma(TR_ha, CE_LEN).
3. atr_scaled = ATR × CE_MULT (1.8).
4. highest_close = rolling max of HA_Close over the last CE_LEN bars.
lowest_close = rolling min of HA_Close over CE_LEN bars.
(If USE_CLOSE=False, it would use HA_High/HA_Low instead.)
5. longStop / shortStop:
o At the first CE bar (index CE_LEN – 1),
− longStop = highest_close – atr_scaled
− shortStop = lowest_close + atr_scaled
o On each subsequent bar:
• raw_long = highest_close – atr_scaled, raw_short = lowest_close +
atr_scaled.
• If prior HA_Close > prior longStop, longStop = max(raw_long, prior
longStop), else longStop = raw_long.
• If prior HA_Close < prior shortStop, shortStop = min(raw_short, prior
shortStop), else shortStop = raw_short.
6. dir_arr (direction array):
o Start with +1 (“long” bias) on the first CE bar.
o For each new bar:
• If HA_Close > prior shortStop, dir = +1 (stay or switch long).
• Else if HA_Close < prior longStop, dir = –1 (go or stay short).
• Else dir = previous dir (carry over).
7. CE buy = A “flip” from dir = –1 to dir = +1 on this bar.
CE sell = A flip from +1 to –1.
These become boolean Series (ce_buy, ce_sell) indexed by each HA bar.
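Steps 5–7 (the stop ratchet, the direction array, and the flips) can be sketched in pure Python. This is a minimal sketch: the rolling extremes and scaled ATR are passed in pre-computed, and all names are mine:

```python
def chandelier_signals(ha_close, highest_close, lowest_close, atr_scaled, ce_len=7):
    """Trailing-stop direction and flip signals per steps 5-7.
    All inputs are equal-length lists; indices before ce_len-1 are ignored."""
    n = len(ha_close)
    long_stop, short_stop = [None] * n, [None] * n
    direction = [None] * n
    ce_buy, ce_sell = [False] * n, [False] * n
    start = ce_len - 1
    for i in range(start, n):
        raw_long = highest_close[i] - atr_scaled[i]
        raw_short = lowest_close[i] + atr_scaled[i]
        if i == start:
            long_stop[i], short_stop[i], direction[i] = raw_long, raw_short, 1
            continue
        # ratchet the stops: only tighten while price stays on the right side
        if ha_close[i - 1] > long_stop[i - 1]:
            long_stop[i] = max(raw_long, long_stop[i - 1])
        else:
            long_stop[i] = raw_long
        if ha_close[i - 1] < short_stop[i - 1]:
            short_stop[i] = min(raw_short, short_stop[i - 1])
        else:
            short_stop[i] = raw_short
        # direction: flip long above the short stop, short below the long stop
        if ha_close[i] > short_stop[i - 1]:
            direction[i] = 1
        elif ha_close[i] < long_stop[i - 1]:
            direction[i] = -1
        else:
            direction[i] = direction[i - 1]
        ce_buy[i] = direction[i] == 1 and direction[i - 1] == -1
        ce_sell[i] = direction[i] == -1 and direction[i - 1] == 1
    return ce_buy, ce_sell
```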

7.6. Check Latest Bar’s Signals

Let i = last bar index (the very most recent HA bar).

1. isB_i = did a classic +DI/–DI crossover happen on bar i?
− isB_i = bool(crossB.iat[i]).
2. isB1_i = did +DI just cross above 20 (and above –DI) on bar i?
− isB1_i = bool(crossB1.iat[i]).
3. Look Back Window:
− start_idx = max(0, i – MAX_GAP).
− Check if a B or B1 happened anywhere in the range [start_idx … i – 1].
− prior_B1 = any(crossB1[start_idx:i])
− prior_B = any(crossB [start_idx:i])
4. Pseudo-B_k = a combined DMI rule “true if ANY of these”:
o B and B1 happen on the same bar (isB_i and isB1_i), OR
o Today is B but B1 happened within the last MAX_GAP bars, OR
o Today is B1 but B happened within the last MAX_GAP bars.
5. “CE buy recent” = did we have any CE buy in [start_idx … i]?
− ce_buy_recent = ce_buy[start_idx : i+1].any()
6. If no CE buy just now (ce_buy_recent=False) but pseudo-B_k=True, check whether
the last CE event in [start_idx … i – 1] was a buy or a sell:
o If there was only a prior buy (no prior sell), treat it as “prev_ce_not_sell = True.”
o If there was only a prior sell (no buy), treat it as “False.”
o If there were both, compare their last indices: if the last buy came after the last
sell, “True”; otherwise “False.”
o If neither buy nor sell occurred in that window, default to “True.”
7. Count how many signals fired in the last window for bar i:
count_signals = int(isB_i) + int(isB1_i) + int(ce_buy_recent)
Then count_ok = (count_signals >= 2).
(This enforces that at least two of (B, B1, CE buy) must coincide in that 0–4 candle
window.)
8. Raw FTBC Condition (ftbc_raw):
o ftbc_raw is True when pseudo_i is True (meaning B/B1 are in that 0–4 bar cluster) and either:
o (1) we have a CE buy within the last MAX_GAP bars and count_ok is True (≥ 2 signals total), OR
o (2) we do NOT have a CE buy right now, but the most recent CE event in that window was also a buy (prev_ce_not_sell = True).

At this point, if either (1) or (2) holds, ftbc_raw is True.
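A condensed sketch of these latest-bar checks on plain lists (names are mine; the real script indexes pandas Series with .iat). Note this is the raw condition only, before the exclusion rule that follows:

```python
MAX_GAP = 4  # look-back window used throughout step 7.6

def ftbc_raw_check(crossB, crossB1, ce_buy, ce_sell, i):
    """Combine the windowed B / B1 / CE-buy checks for the latest bar i."""
    start = max(0, i - MAX_GAP)
    isB_i, isB1_i = crossB[i], crossB1[i]
    prior_B = any(crossB[start:i])
    prior_B1 = any(crossB1[start:i])
    # pseudo rule: B and B1 together, or one now and the other recently
    pseudo_i = (isB_i and isB1_i) or (isB_i and prior_B1) or (isB1_i and prior_B)
    ce_buy_recent = any(ce_buy[start:i + 1])
    # was the most recent CE event in [start, i-1] a buy (or absent)?
    prev_ce_not_sell = True
    for j in range(i - 1, start - 1, -1):
        if ce_buy[j]:
            break
        if ce_sell[j]:
            prev_ce_not_sell = False
            break
    count_ok = int(isB_i) + int(isB1_i) + int(ce_buy_recent) >= 2
    return pseudo_i and ((ce_buy_recent and count_ok)
                         or (not ce_buy_recent and prev_ce_not_sell))
```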

❗ Exclusion Rule for B & B1 on the Same Candle ❗

Immediately after setting ftbc_raw=True, there’s a hard‐stop:

# If B and B1 both fire today, but NO CE buy AND ADX has fallen since
yesterday,
# then force ftbc_raw = False, regardless of the above.
if isB_i and isB1_i and (not ce_buy_recent) and (i > 0):
adx_today = adx.iat[i]
adx_yesterday = adx.iat[i-1]
if adx_today < adx_yesterday:
ftbc_raw = False

 When does this trigger?
1. Today's bar has both B and B1 (isB_i and isB1_i).
2. There was no CE buy in the 0–4 candle window (not ce_buy_recent).
3. And today's ADX is lower than yesterday's ADX (adx_today < adx_yesterday).
 What does it do? It overrides everything and sets ftbc_raw = False.

In other words, if B and B1 appear together on the same candle, but there is no Chandelier-Exit
buy signal in that 0–4 bar window, and ADX has not risen from the prior bar, then the script
deliberately excludes that “entry” by zeroing out ftbc_raw.

7.7. EMA Trend Filter

Only if ftbc_raw remains True after that exclusion check does the code then require:

 The latest HA_High price must be above its 10-, 20-, and 50-period EMAs (all computed
on HA_High).
 If it passes, that ticker is appended to matches.

8. Reporting

At the very end, the script prints a list of all symbols that passed both the raw FTBC check
(including the B/B1 exclusion) and the EMA filter on the most recent HA bar.

Direct Answer to Your Question

Does this code exclude B and B1 occurring on the same candle when there is no ADX rising
and no Chandelier-Exit buy?

Yes. Right after it sets ftbc_raw = True for any B/B1/CE combinations, there’s a block that
specifically looks for:

1. isB_i and isB1_i (B and B1 on the same latest candle),
2. not ce_buy_recent (no CE buy in the 0–4 bar window),
3. adx_today < adx_yesterday (ADX has fallen instead of risen).

If all three are true, it forces ftbc_raw = False. In short, if B and B1 land together on one bar
but ADX isn’t climbing and there’s no recent CE buy, the code explicitly excludes that as a valid
signal.

Or, in easy words, for each symbol it does:

1. Skip if there aren’t at least about 50 bars of data (so the indicators have room to work).
2. Build HA bars using step 4.
3. Compute +DI, –DI, ADX on those HA bars from step 5.
4. Look for two DMI crossover signals on each bar:
o B signal: When yesterday plusDI ≤ minusDI, and today plusDI > minusDI.
o B1 signal: When yesterday plusDI ≤ 20, and today plusDI > 20 and plusDI >
minusDI.
5. Calculate Chandelier Exit “buy” and “sell” signals on the HA_Close:
o Use True Range on HA bars, smooth it over 7 bars to get ATR.
o Compute a “stop” line above or below price.
o If price flips direction across that line, it’s a “CE buy” or “CE sell.”
6. Check the very latest HA bar (call its index i):
o isB_i? Did a B crossover happen right now?
o isB1_i? Did a B1 crossover happen right now?
o Look back up to 4 bars (i – 4 to i – 1) to see if B or B1 happened recently.
o “Pseudo” rule: If B and B1 happened together right now, or if one happened now
and the other happened in the last 4 bars, set a flag pseudo_i = True.
o Did we get any CE buy in those last 5 bars (i – 4 to i)? If yes, ce_buy_recent =
True.
o If there is no CE buy right now but pseudo_i is True, check among those last 4
bars whether the most recent CE event was a buy (so we know if
“prev_ce_not_sell” is True).
o Count how many of (B, B1, CE buy) happened in that 5-bar window. If 2 or 3
of them line up, that passes one of our conditions.
o Raw FTBC =
 If pseudo_i is True and (either CE buy happened in the last 5 bars and at
least two signals matched,
 Or there was no CE buy now but the last CE action in that window was
also a buy), then ftbc_raw = True.
7. Now apply the Exclusion:
o If B and B1 both fired on this same bar (isB_i and isB1_i),
o and there was no CE buy in the last 5 bars,
o and today's ADX is lower than yesterday's ADX,
o then force ftbc_raw = False.
In plain language: if both B and B1 appear together on one candle, but ADX
didn’t go up and you didn’t get any Chandelier-Exit buy signal recently, throw out
this signal.
8. EMA Trend Check:
o Compute 10-, 20-, and 50-bar EMAs on HA_High.
o If today’s HA_High is above all three EMAs, ema_ok = True.
9. Final Decision:
o If ftbc_raw is still True and ema_ok is True, add this symbol to matches.
10. Print Results

 At the very end, it prints all tickers that passed both the raw FTBC test (including the
B/B1 exclusion) and the EMA trend check on the latest bar.

Super-Simple Answer to the Exclusion Question

Does it kick out cases where B and B1 happen on the same candle, with no ADX rising and
no CE buy?

Yes. Right after the script thinks “Okay, B and B1 and maybe CE all line up,” it asks:

 "Did B and B1 both fire on the same bar?"
 "No CE buy in the last 5 bars (i – 4 to i)?"
 "Is today's ADX lower than yesterday's?"

If all three are true, it cancels the signal (ftbc_raw = False). So it’s explicitly excluding any
candle where B and B1 happen together but ADX didn’t go up and there was no recent CE buy.

CTBC entry satisfies the following.

1. Fractals are calculated with the following logic: green and red fractals are calculated on daily
Heikin-Ashi.
2. CTBC entry occurs when
a) an HA candle (with a body or wick) cuts the previous green fractal for the first time.
Subsequent HA candles cutting the same fractal are not counted;
b) when (a) happens, daily HA high > 10 EMA & daily HA high > 20 EMA & daily HA high > 50 EMA;
c) when (a) and (b) happen, daily Heikin-Ashi ADX >= previous ADX (flat or rising at entry) & DI
positive > 20.
3. The last CE exit signal was not a sell signal.
4. Also, for the entry Heikin-Ashi candle i under consideration:
- ADX for i > ADX for i-1
- ADX for i-2 < ADX for i-3
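The ADX shape required at the entry candle i (rising into entry, after a dip two candles back) can be stated directly; a minimal sketch with an illustrative helper name:

```python
def ctbc_adx_shape(adx, i):
    """ADX rising into entry candle i, with a dip two candles back."""
    return adx[i] > adx[i - 1] and adx[i - 2] < adx[i - 3]
```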
North star: I have a goal, with yearly and monthly goals, and milestones. Answer daily: how are all
of these helping achieve the goal?

Ideas

Trending Tickers dashboard/scan (where a possible DTBC entry can come)

1. MTBC
2. MTBC and WTBC

Entry quality score ranking for a given ticker and date

If I enter a ticker and a date, it calculates whether it's a CTBC or FTBC, and whether it is on MTBC and WTBC.

It calculates a ranking score based on quality and returns it.

Find today's TBC tickers and score ranking:

1. Find DTBC. Ranking score.
2. Find MTBC + DTBC. Ranking score.
3. Find MTBC + WTBC + DTBC. Ranking score.
4. Find aligned MTBC + WTBC + DTBC. Ranking score.

Note: DTBC can be FTBC or CTBC.

Back tester versions

1. For each of the above, for a given ticker or a universe of tickers, do a back test and capture major
performance KPIs.

(Use hourly CE exit sell as a square-off trigger.)

BT Idea sprinkles

1. For an existing MTBC, find FTBC and/or CTBC. For entries ranking above a certain threshold:
enter on the hourly HA when the last daily HA fractal is cut; SL at the low of the hourly CE exit
buy candle. Overall exit when, on the hourly HA, a sell is given by CE exit. (Another variant: if
the DTBC still exists, enter again on an hourly CE exit buy and exit on an hourly CE exit sell.)
2. Two approaches to moving the SL. One is mentioned above: hourly CE exit buy and sell. The other
is hourly entry with a moving SL at the hourly HA 50 EMA below close.
3. Do the same for aligned MTBC, WTBC and daily FTBC/CTBC, above the ranking threshold.
4. Do the same for aligned or unaligned MTBC, WTBC and DTBC (FTBC/CTBC) above the ranking threshold.
5. M, W, D is essentially 20:5:1 for stocks | W, D, 3h is 5:1:1/8 (or 40:8:1) for currencies |
15m, 5m, 3m for the BNF family | similar for commodities. For these, run the above idea-based
back tests to find optimal strategies, similar to 1 to 3.
6. For the BNF family, once you have the optimum strategy, write a constantly running real-time
program that finds FTBC and CTBC entries with scores and provides recommendations for entry
and exit, all in real time.
7. A BT system with a pipeline that also uses historic options data with the optimized BNF
strategy to arrive at a PNL and key strategy performance metrics. Pure option buying.
8. A similar system for BTCUSDT options BT.

ML usage

1. ML to arrive at a better prediction (ranking score for an entry): a self-learning system which
takes into consideration multiple parameters and indicators to arrive at a >70% success ratio,
SQN > 2.5 system (hint: ask o3 itself how it would go about creating such a system given all the
background).
2. Once the strategies are optimized for stocks, BNF, currency, and commodities, use ML to further
optimize the success KPIs. The end goal is to create a system that can be scaled to a hedge fund.
3. Agents and agentic AI that do the above two. Maybe use RL.
4. ML super charge: ML ideas as highlighted.

10 Ways Machine-Learning Can Super-Charge Your "TBC" Indicator Suite

(from quick-wins to blue-sky ideas)

1. Threshold-Optimizer
Why it helps: Your TBC rules use hand-picked numbers (ADX ≥ DI-, Donchian 20, EMA 10/20/50). A Bayesian-optimization loop can learn per-universe or per-regime values that maximise out-of-sample Sharpe.
First steps / tools: scikit-optimize, hyperopt, walk-forward CV on a stored feature DB.

2. Meta-Classifier on Top of Rules
Why it helps: Treat every weekly/monthly bar as a row, add the raw indicators plus binary flags ("tight-TBC fired"), then train a model to predict "top-quartile forward return". The model learns when to trust or ignore the rule.
First steps / tools: LightGBM / CatBoost; label = next-4-week excess return.

3. Regime-Detector
Why it helps: A hidden-Markov model or self-supervised transformer segments the market into bull/bear/volatile regimes. Only trade tight-TBC in regimes where the hit-rate is high.
First steps / tools: hmmlearn, tslearn, or a simple k-means on VIX, breadth, macro.

4. Adaptive Stop/Target via Reinforcement Learning
Why it helps: Keep your entry logic, but let an RL agent decide exits (take-profit, stop-loss) and position size to maximise risk-adjusted P&L.
First steps / tools: OpenAI Gym-style environment; stable-baselines3, vectorised back-tests.

5. Feature Factory with AutoML
Why it helps: Go beyond ADX & EMAs: auto-generate hundreds of TA signals, lagged returns, macro & sentiment; AutoML selects the best.
First steps / tools: h2o.ai AutoML, auto-sklearn, featuretools.

6. Cluster-Specific Models
Why it helps: Cluster stocks by fundamental style or price-action embeddings; train separate models/thresholds per cluster → avoids "one-size-fits-all".
First steps / tools: UMAP + HDBSCAN; graph neural networks on a correlation graph.

7. Synthetic-Data Augmenter
Why it helps: GANs or time-series VAEs create additional training scenarios (crashes, melt-ups) so the model learns rare-event behaviour without leaking real data.
First steps / tools: TimeGAN, ts-augment, evaluate with DTW distance.

8. Market-Cap-Aware Ranking Model
Why it helps: Instead of binary signals, use gradient-boosting to rank all tickers by probability of outperforming, and weight predictions by log-market-cap to reduce small-cap noise.
First steps / tools: CatBoost ranking ("YetiRank"), integrate into portfolio optimiser.

9. Explainability Layer
Why it helps: SHAP values on the meta-classifier show which inputs (ADX slope, Donchian distance, macro) drive each pick → trust & debugging.
First steps / tools: shap, eli5, store explanations alongside signals.

10. Continuous-Learning Agent
Why it helps: After each new bar, automatically append the latest data, re-train lightweight models, re-score the universe, and push trades—no manual retrains.
First steps / tools: River (online ML), incremental LightGBM, orchestrated via Prefect/Ray.

Quick Proof-of-Concept Recipe

# 1. Build a feature table
X = [
    "HA_high/EMA10", "HA_high/EMA20", "HA_high/EMA50",
    "plusDI", "minusDI", "ADX",
    "ADX_diff",       # ADX[i] - ADX[i-1]
    "Donchian_dist",  # HA_high - Donchian_upper
    "tight_TBC_flag", "loose_TBC_flag",
]
y = "forward_4w_return"  # target label

# 2. Train/test split with purged walk-forward CV
from sklearn.model_selection import TimeSeriesSplit
from lightgbm import LGBMRegressor
model = LGBMRegressor(...)
# loop TimeSeriesSplit, keep last fold as live holdout

# 3. Calibrate probability / expected return

# 4. Rank top-N tickers each week, feed to portfolio module
Guardrails & Gotchas

1. Look-ahead bias – always shift labels and recompute indicators only with past data
during CV.
2. Over-fitting small universes – prefer ridge/boosting over deep nets unless you have
thousands of samples per symbol.
3. Data drifts – schedule monthly re-training and compare SHAP baseline to detect drift.
4. Execution costs – when meta-model suggests frequent trades, fold slippage/fees into
label.

Holistic Vision

 Rule-Based Core gives human-interpretable entries.
 ML Layers sit on top to adapt thresholds, filter false positives, optimise exits, and size
positions.
 Agents & Orchestrator (outlined below) ensure the data, training, scoring, and deployment loop
runs automatically—while you observe dashboards and dig into SHAP explanations for new edges.

This hybrid of deterministic signals + learning layers often beats either approach alone: you keep
the domain intuition of HA/ADX/Donchian, but gain data-driven adaptability and continuous
improvement.

10 Machine-Learning Ideas — Explained Like We're Chatting Over Coffee ☕

1. Tune the Numbers for You
In plain English: Instead of guessing things like "Donchian 20" or "ADX ≥ 25," let a computer try dozens of combinations and tell you which mix worked best in past data.
Why it makes your rules better: Your rules fit your market and timeframe, not someone else's blog post.

2. Teach a "Bouncer" Model
In plain English: Keep your rule as the doorman, but train a small model that says "let this trade in / keep that one out" based on extra clues (volatility, gap size, volume).
Why it makes your rules better: Filters out the junk trades while keeping the good ones.

3. Market-Mood Detector
In plain English: A model spots when we're in bull, bear, or choppy phases. You only use TBC rules in moods where they historically win.
Why it makes your rules better: Avoids firing long breakouts in a crashing market.

4. Smart Exit Robot
In plain English: Entries stay rule-based; an RL agent learns the best time to take profits or cut losses.
Why it makes your rules better: You stop leaving money on the table—or bleeding it away—because of rigid exits.

5. Automatic Feature Kitchen
In plain English: Software cooks up hundreds of technical ratios and picks the tastiest ones for a prediction model.
Why it makes your rules better: Finds hidden edges you'd never think to hand-code.

6. Stock "Tribes"
In plain English: Cluster similar stocks (e.g., fintech, utilities) and have mini-models for each tribe.
Why it makes your rules better: A breakout in a sleepy utility shouldn't be judged by the same yardstick as a meme-tech stock.

7. Synthetic History
In plain English: A neural net invents extra "what-if" price series (flash crashes, melt-ups) to train on.
Why it makes your rules better: Prepares the strategy for rare events it hasn't seen yet.

8. Rank Instead of Yes/No
In plain English: A model scores every stock 0–100 on "chance to outperform," then you buy the top 20.
Why it makes your rules better: Lets you size bets by confidence, not just thumbs-up/thumbs-down.

9. Why-It-Works Charts (SHAP)
In plain English: Explainer tech shows which indicators mattered for each prediction.
Why it makes your rules better: You gain trust and can tweak rules with insight, not guesswork.

10. Auto-Retrain Loop
In plain English: After each new week or month, new data is added and the model quietly updates itself.
Why it makes your rules better: The edge stays fresh without you babysitting it.

Bottom line:
Machine learning doesn't replace your trading logic—it polishes, filters, and adapts it so the
signals stay sharp and relevant as markets change.

Agentic usage

An Agentic-AI Blueprint for Your Multi-Universe, Multi-Signal Pipeline

(How to turn the scripts you've built into a self-orchestrating research "factory")

1. Data-Universe Layer
Universe Curator: Scrape & normalize constituents (S&P 500, R2000, NASDAQ, NYSE, NSE); resolve symbol clashes (BRK.B → BRK-B, etc.); append static fundamentals (market-cap, sector, float). Tools: Python + BeautifulSoup/Requests, fuzzy-matching libs, DuckDB for fast joins.
Price Downloader: Pull multi-resolution OHLCV (daily/weekly/monthly) via batched yfinance or the Polygon API; retry, checkpoint, delta-update only new bars. Tools: async HTTP, pandas-market-calendars, Redis cache.

2. Feature-Engineering Layer
HA-Bar Generator: Recursively compute Heikin-Ashi candles in pandas + vectorized NumPy. Tools: Numba/Polars for speed.
Indicator Forge: Produce DI+/DI-, ADX, CE long/short, Donchian bands, multi-EMA, etc., on any timeframe. Tools: TA-Lib bindings or custom numba kernels.

3. Screening & Signal Layer
TBC Evaluator: Evaluate FTBC, CTBC, weekly-loose TBC, weekly-tight TBC, monthly-tight TBC; emits boolean "events" per symbol/timeframe. Tools: rule engine (jsonlogic) to avoid hard-coding.
Edge-Case Sentinel: Auto-flag suspicious data (split gaps, crazy ATR, missing volume) and hand the ticker back to the Price Downloader for re-pull. Tools: z-score heuristics.

4. Orchestration Layer
Task Scheduler: Cron-style DAG: Universe → Prices → Features → Signals; knows when monthly/weekly bars close (exchange calendars). Tools: Prefect / Airflow or Temporal.
Dependency Resolver: Guarantees each downstream agent waits for upstream data freshness (e.g., "don't run weekly-TBC until Friday close + 15 min"). Tools: Prefect's state machine.

5. Storage & Query Layer
Vector Store Keeper: Persist feature tensors & signal flags into DuckDB or Parquet partitions; expose a SQL-like API to downstream analytics. Tools: DuckDB, Polars.

6. Explanation & Chat Layer
LLM Insight Agent: Given a symbol + date, pull the stored tensors and narrate why it triggered (e.g., "HA-High pierced Donchian upper; ADX climbed 3 pts."). Tools: LangChain + RetrievalQA.

7. Research-Ops Layer
Backtest Autopilot: When a new rule set appears (e.g., "weekly ultra-tight TBC"), auto-build a backtest template, run it on history, spit out metrics. Tools: Zipline / VectorBT; pydantic schema for strategies.
Report Crafter: Generates PDF/HTML dashboards each weekend: heat-map of triggers, top-N by market-cap, sector breakdown. Tools: Jinja2 + WeasyPrint / Plotly Dash.

8. Guardrails & Monitoring Layer
Anomaly Watchdog: Alert Slack/Email if download failure > X% or if signals drop to zero unexpectedly. Tools: Prometheus + Grafana; automations.create() for alerts.
Cost Controller: Monitors API quota, toggles a secondary data source if limits are near thresholds. Tools: tiny RL loop or policy.
How Agents Interact (Orchestration Graph)


flowchart TD
A(Universe Curator) --> B(Price Downloader)
B --> C(HA-Bar Generator)
C --> D(Indicator Forge)
D --> E(TBC Evaluator)
D --> F(Edge-Case Sentinel)
F -->|flag| B
E --> G(Vector Store Keeper)
G --> H(LLM Insight Agent)
E --> I(Report Crafter)

 Self-Healing Loop: Edge-Case Sentinel detects anomalies → re-queues symbol for fresh
download.
 Explainability Loop: LLM Insight Agent queries Vector Store → answers “Why did
TSLA enter weekly-tight-TBC?” in natural language.

Implementation Hints

1. Choose an agent framework
o CrewAI / LangChain Agents for modular, tool-using sub-agents.
o Wrap heavy numeric work in pure Python + NumPy/Polars functions; expose them as tools.
2. Stateless vs. Stateful
o Keep expensive price/feature arrays in DuckDB/Parquet (stateful).
o Agents remain mostly stateless, pulling slices on demand.
3. Parallelism
o Price downloads & indicator calculations are “embarrassingly parallel.” Spawn a
pool of workers (asyncio + httpx or Ray).
4. Scheduling
o Use Prefect flows with schedule="0 22 * * Fri" for weekly run; an “end-of-
month” cron for monthly.
5. Prompt-Engineering for the Insight Agent

   You are a quant explainer. Given JSON {symbol, date, features}, explain in
   <120 words why it satisfied weekly-tight-TBC. Reference DI+/DI- values,
   Donchian levels, ADX slope, EMAs.

6. Human-in-the-Loop
o If Edge-Case Sentinel sees >5 consecutive NaNs for DI, escalate to Slack channel "#data-issues".
7. Optional Reinforcement Loop
o Backtest Autopilot logs risk-adjusted return. If Sharpe < 0.5 three months in a row, the agent suggests rule tweaks.

End-State Vision

A self-driving research platform where:

 Sunday night the scheduler fires, all universes refresh, features recompute, and
CSVs/plots hit your inbox.
 A Slack bot (/explain TATA 2025-06-06 weekly-tight) calls the LLM Insight Agent
and returns a digest.
 You iterate on new signal recipes in natural language (“tighten Donchian look-back to
15”)—the system autogenerates code, backtests, and reports.

That’s the power of agentic AI: decomposed, collaborative micro-experts that keep your data
clean, signals live, explanations transparent, and research velocity high.

Big-Picture Takeaways (in plain English)

1. Turn your scripts into little “workers.”


o Instead of one long notebook, split the job up into small, specialized bots (agents).
o Each bot has a single purpose: download prices, clean tickers, build Heikin-Ashi
bars, calculate indicators, flag signals, write reports, etc.
2. Link the bots in a relay race.
o Universe Bot → Price Bot → Indicator Bot → Signal Bot → Report Bot
o When one bot finishes, it hands its output to the next—like passing a baton.
3. Make the system self-healing.
o A “watchdog” bot checks for missing data or weird values.
o If something breaks (e.g., a download fails), it automatically retries or alerts you.
4. Keep the heavy data in a central store.
o All monthly/weekly candles, indicators, and signals live in a fast database (e.g.,
DuckDB or Parquet files).
o Bots just pull the slices they need; nothing is lost between runs.
5. Add an “explainer” chatbot on top.
o Ask: “Why did TSLA trigger a weekly tight TBC last Friday?”
o The chatbot fetches the stored indicators and answers in plain English.
6. Automate the calendar.
o A scheduler runs the weekly scan after markets close each Friday and the monthly
scan at month-end—no manual clicks.
7. One-click reports.
o A report-writing bot emails a CSV of new signals and a quick PDF dashboard every weekend.
8. No more copy-paste hell.
o You describe a new rule (“tighten Donchian look-back to 15”), and the bots
regenerate the code, rerun back-tests, and update the reports automatically.

In short: break the workflow into clear jobs, give each job to an automated helper, connect
them, and let the system run itself—while still being able to chat with it and get
explanations whenever you need.
Data collection for practice

1. For a given ticker, when an MTBC exists, for the duration of each MTBC, capture screenshots of the
monthly, weekly, daily and hourly charts (from the saved TradingView layouts I have, which include
indicators etc.). Write Python code that captures the screenshots and exports them to an Evernote note.
You’ll have a “one-click” (or “zero-click,” if scheduled) solution: ticker in →
screenshots out → journal ready.
2. One way to measure strength of momentum is acceleration. On a DTBC, from the last fractal cut
to the first orange candle (when the DTBC ends), calculate the second-order derivative of ADX and rank
tickers by that value.
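The acceleration idea in point 2 can be sketched with a discrete second difference of the ADX series. This assumes the DTBC window (last fractal cut to first orange) has already been identified; the function and ticker data below are illustrative only.

```python
# Rank momentum strength by ADX "acceleration": the mean second-order
# difference of ADX over the DTBC window (last fractal cut -> first orange).

def adx_acceleration(adx_window):
    """Mean discrete second difference of an ADX series (list of floats)."""
    if len(adx_window) < 3:
        return 0.0
    second_diffs = [
        adx_window[i + 1] - 2 * adx_window[i] + adx_window[i - 1]
        for i in range(1, len(adx_window) - 1)
    ]
    return sum(second_diffs) / len(second_diffs)

# Rank tickers by acceleration, strongest first (data is made up).
windows = {
    "AAA": [20, 22, 25, 29],   # first differences 2, 3, 4 -> accelerating
    "BBB": [20, 24, 27, 29],   # first differences 4, 3, 2 -> decelerating
}
ranked = sorted(windows, key=lambda t: adx_acceleration(windows[t]), reverse=True)
print(ranked)  # → ['AAA', 'BBB']
```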
Random idea

Take all max data for a ticker:

 Daily candles.
 Hourly candles.

 Convert to daily HA and hourly HA.
 Calculate CE exit on daily and hourly HA.
 Calculate ADX and ADX DI on daily HA and hourly HA.
 Calculate daily fractals on HA.

Using the above, find FTBC HA dates.

For each date, find which hourly HA candle cuts the previous daily HA fractal. This is the trigger hourly
HA candle. Use the open of the next hourly (normal) candle as the entry. Place the SL at the low of the normal
candle which cuts the daily fractal (hour-wise this corresponds to the same hourly HA candle that cuts the
previous daily fractal, i.e. the trigger hourly HA candle). Buy 4 shares. Risk R is set by the initial SL.
Normal exit: stay in the trade until, on the hourly HA, the 10 EMA crosses and falls below the 20 EMA, or a
CE exit sell fires on HA (whichever comes first). Once you reach a 1R multiple, peel off 50% of the quantity.
When it reaches 3R, peel off another 25%, and for the rest wait for the normal exit.
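Assuming the entry and initial SL are already known, the 1R/3R peeling arithmetic above can be sketched as follows; `peel_plan` is a hypothetical helper, not part of any existing codebase.

```python
# Sketch of the peel logic: buy 4 shares, risk R = entry - initial SL,
# peel 50% at 1R, another 25% at 3R, and exit the rest at the normal exit.

def peel_plan(entry, stop, qty=4):
    r = entry - stop                      # 1R expressed in price terms
    return [
        {"level": entry + 1 * r, "sell": qty // 2},               # 50% at 1R
        {"level": entry + 3 * r, "sell": qty // 4},               # 25% at 3R
        {"level": "normal exit", "sell": qty - qty // 2 - qty // 4},
    ]

for leg in peel_plan(entry=100.0, stop=95.0):
    print(leg)
# With entry 100 and SL 95, R = 5: sell 2 at 105, 1 at 115, 1 at the normal exit.
```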
Approach

 Check if FTBC HA dates are being captured correctly.
 Check if hourly CE exit buy and sell signals are being captured correctly.
 Check if, on the day of the FTBC date, the hourly HA candle which cuts the previous daily HA green
fractal is being captured correctly.

If these three are captured correctly, then we can go ahead and run the above.

Ultimate approach combining everything to create a final end system from scratch
Final end goal: create a system (or systems) that is among the best in the world / world class, ideally with the
following features:

1. Net expectancy >= 0.2 R
2. SQN > 2.5
3. Sharpe >= 3 (even > 2.5 is tolerable)
How do we go about achieving this?

Core strategy:

1. MTBC x WTBC x DTBC
   Confirmative pre-req with existing MTBC & WTBC (existing or new) | Entry trigger being DTBC*
2. Aligned MTBC x WTBC x DTBC
   Confirmative pre-req with existing Aligned MTBC & WTBC (existing or new) | Entry trigger being DTBC*

*Where of course each of these will be defined.
*DTBC has three types → FTBC, CTBC, and Fractal CTBC

First, we will create and check/vet modules needed here.

Define DTBC trigger types

1. FTBC (ftbc – coincide and coincide)
2. CTBC
3. Fractal CTBC

Create Python code for:

 U = Identify each of these DTBC trigger types from a universe of existing tickers today
 T = Identify each of these DTBC trigger types across a timeline in a given ticker
 2T = Identify each of these DTBC trigger types between two given dates
 G = Check on a given date

Basically, for the grid below, visually vet that the code works properly and the signals identify the right
triggers on the chart.

               G     U     T     2T
FTBC
CTBC
Fractal CTBC
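All four scan modes (G, U, T, 2T) can be expressed as thin wrappers over one per-ticker, per-date trigger check, which keeps the grid above easy to fill in. In this sketch `has_trigger` is a hypothetical stand-in for the real FTBC/CTBC/Fractal CTBC detectors.

```python
from datetime import date, timedelta

def has_trigger(ticker, day, kind):
    # Placeholder for the real FTBC / CTBC / Fractal-CTBC detector.
    return (len(ticker) + day.day) % 3 == 0 and kind == "FTBC"

def scan_g(ticker, day, kind):
    # G: check a single ticker on a given date.
    return has_trigger(ticker, day, kind)

def scan_u(universe, day, kind):
    # U: scan the whole ticker universe for today.
    return [t for t in universe if has_trigger(t, day, kind)]

def scan_2t(ticker, start, end, kind):
    # 2T: all trigger dates between two given dates (inclusive).
    return [start + timedelta(d) for d in range((end - start).days + 1)
            if has_trigger(ticker, start + timedelta(d), kind)]

def scan_t(ticker, history_start, today, kind):
    # T: full timeline is just 2T from the start of history to today.
    return scan_2t(ticker, history_start, today, kind)

print(scan_2t("TSLA", date(2024, 1, 1), date(2024, 1, 3), "FTBC"))
```

Because T, 2T and G all reduce to the same `has_trigger` call, visually vetting G for each trigger type effectively vets one cell of each row in the grid.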

 Then visually vet C.Exit buy/sell triggers across timeframes M, W, D, H, 2H
 Visually vet HA fractal cuts across timeframes M, W, D, H, 2H

All of the above need to be visually vetted to ensure the code gives proper triggers that match the
chart triggers.

Now going for the special trade play → entry, SL, hold/moving SL and exit, all in a smaller time frame.

On the day of the DAILY TRIGGER, which is essentially FTBC or CTBC or Fractal CTBC (along with certain other
conditions which will be explained later), the following logic ought to be followed.

1. Entry:
a. Shift to the hourly HA and hourly normal candle timeframe.
b. Immediately capture the hourly HA candle which cuts the previous daily green fractal.
c. Use the open of the corresponding next hourly candle (normal, not HA) as the entry.
2. Initial SL:
a. CExit sell signal on the normal candle.
3. Holding period / moving SL / exit (this has multiple modules):
a. Module 1: Entry same as above. Then, for as long as the DTBC exists (daily HA ADX rising or
flat), use the CExit close on the normal candle (second scenario: HA candle) as the exit, with the 50
EMA as the moving SL. For the exit, the CExit close usually comes before the 50 EMA
down-crossover. At different R multiples, peel different quantities (captured in the table
below).
b. Module 2: Entry same as above. Moving SL: 50 EMA on hourly HA. Exit only when the normal
hourly candle close crosses down through the hourly 50 EMA from above. At different R multiples,
peel different quantities (captured in the table below).
Of course, visual vetting at every stage is needed.
Table for peeling

           Peeling style 1   Peeling style 2   Peeling style 3       Peeling style 4
Module 1   50% at 1R →       50% at 1R →       33% at 1R, 33% at     50% at 1R →
           25% at 3R →       25% at 2R →       2R and rest at exit   rest at exit
           rest at exit      rest at exit
Module 2   50% at 1R →       50% at 1R →       33% at 1R, 33% at     50% at 1R →
           25% at 3R →       25% at 2R →       2R and rest at exit   rest at exit
           rest at exit      rest at exit

Across all of the above peeling styles, after the first peel at 1R, raise the SL to break even.
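The peeling table can be encoded as data so every module/style pair shares one execution routine, and the break-even rule after the first 1R peel falls out naturally. This is a sketch: the fractions and levels are taken from the table, and the function name is an assumption.

```python
# Peeling styles as data: list of (R multiple, fraction of original quantity).
# Both modules share the same styles; the remainder always goes at the exit.
PEELING_STYLES = {
    1: [(1, 0.50), (3, 0.25)],   # 50% at 1R, 25% at 3R, rest at exit
    2: [(1, 0.50), (2, 0.25)],   # 50% at 1R, 25% at 2R, rest at exit
    3: [(1, 0.33), (2, 0.33)],   # 33% at 1R, 33% at 2R, rest at exit
    4: [(1, 0.50)],              # 50% at 1R, rest at exit
}

def apply_style(style, entry, stop, qty):
    r = entry - stop
    legs, sold = [], 0
    for mult, frac in PEELING_STYLES[style]:
        n = round(qty * frac)
        legs.append({"price": entry + mult * r, "qty": n})
        sold += n
        if mult == 1:                 # after the first peel at 1R,
            stop = entry              # raise the SL to break even
    legs.append({"price": "exit", "qty": qty - sold, "stop": stop})
    return legs

print(apply_style(1, entry=100.0, stop=95.0, qty=4))
```

Swapping the style or module then only changes the lookup key, not the execution code.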
For each of the modules and peeling styles:

1. Run code to visually vet correctness
2. Run the code across multiple tickers to optimize for net expectancy, SQN
3. Repeat 1 and 2 replacing H with 2H as the time frame

Final step
 Aligned MTBC is when, for the monthly HA, ADX DI+ > ADX DI–
 MTBC is when, for the monthly HA, ADX DI+ > ADX DI–, the ADX of the current monthly
HA > the ADX of the previous monthly HA, and the monthly HA candle is green
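These two definitions translate directly into boolean checks on the latest two monthly HA bars. The bar field names (`di_plus`, `di_minus`, `adx`, `is_green`) are assumed for illustration; the conditions themselves follow the definitions above exactly.

```python
# Boolean checks for the two monthly conditions, as defined above.
# Bar fields (di_plus, di_minus, adx, is_green) are assumed names.

def is_aligned_mtbc(bar):
    # Aligned MTBC: monthly HA ADX DI+ > ADX DI-.
    return bar["di_plus"] > bar["di_minus"]

def is_mtbc(bar, prev_bar):
    # MTBC: DI+ > DI-, ADX rising vs the previous monthly HA, and a green HA candle.
    return (bar["di_plus"] > bar["di_minus"]
            and bar["adx"] > prev_bar["adx"]
            and bar["is_green"])

prev = {"di_plus": 25, "di_minus": 20, "adx": 18, "is_green": True}
curr = {"di_plus": 28, "di_minus": 19, "adx": 22, "is_green": True}
print(is_aligned_mtbc(curr), is_mtbc(curr, prev))  # → True True
```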
Once all of the above, starting from the goal, has been vetted, here’s the algo to be followed for the core strategy –

MTBC x WTBC x DTBC or Aligned MTBC x WTBC x DTBC

1. Check for a DTBC trigger (any of FTBC, CTBC, or Fractal CTBC is fine)
2. If found, check if there is a corresponding existing or new WTBC on the weekly HA
3. If yes, check for MTBC or aligned MTBC on the corresponding monthly candle
4. If 1, 2 and 3 are all yes, then shift to the smaller time frame (H or 2H) and execute the special trade
play
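The four-step gating logic above can be sketched as one chained check. Every `check_*` predicate below is a hypothetical placeholder for the corresponding vetted module; only the gating order is taken from the algo.

```python
# Chain of gates for the core strategy; each entry in `checks` is a placeholder
# for a vetted module. Only when all gates pass do we execute the trade play.

def core_strategy(ticker, day, checks):
    if not checks["dtbc_trigger"](ticker, day):          # 1. any DTBC trigger
        return "no trade: no DTBC trigger"
    if not checks["wtbc"](ticker, day):                  # 2. existing/new WTBC
        return "no trade: no WTBC"
    if not (checks["mtbc"](ticker, day)
            or checks["aligned_mtbc"](ticker, day)):     # 3. MTBC or aligned MTBC
        return "no trade: no MTBC"
    return "execute special trade play on H/2H"          # 4. drop to H/2H

always = lambda t, d: True
checks = {"dtbc_trigger": always, "wtbc": always,
          "mtbc": always, "aligned_mtbc": lambda t, d: False}
print(core_strategy("TSLA", "2024-06-21", checks))
# → "execute special trade play on H/2H"
```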

Tweak all of the above until you reach your end goal in terms of system parameters.