We do not hold crypto. We trade the movements. Over the past year, we poured thousands of hours into pushing past the noise. The systems showcased here are legacy: our past failures, rewrites, and abandoned models. We aren't sharing our alpha; we are sharing our velocity. This is how far we've come, to show you how far we'll go.
A look at the sheer volume of engineering, data, and hard lessons locked in our archives.
Initially, we believed NLP sentiment analysis on mainstream news could beat the market. We attempted to scrape CBC News to predict trends, but quickly learned markets move radically faster than the news cycle. We then tried simple SMA crossovers. The harsh reality of the "Chop" hit us—sideways markets trigger frequent false crossovers, and exchange fees bleed out any small gains. The remnants of these earliest, naive strategy scripts live in the Strategy Research folder.
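A minimal sketch of the crossover logic those early scripts relied on; the function name and parameters here are illustrative, not the archived code:

```python
import pandas as pd

def sma_crossover_signals(close: pd.Series, fast: int = 10, slow: int = 30) -> pd.Series:
    """Return +1 on a bullish crossover, -1 on a bearish one, 0 otherwise."""
    fast_sma = close.rolling(fast).mean()
    slow_sma = close.rolling(slow).mean()
    above = (fast_sma > slow_sma).astype(int)
    # A crossover is a change in which SMA sits on top between consecutive bars.
    return above.diff().fillna(0).astype(int)

# In a sideways ("chop") market the two SMAs braid around each other,
# emitting many alternating signals whose tiny gains are eaten by fees.
```

On a clean trend this emits a single entry; on a choppy series it fires repeatedly, which is exactly the fee bleed described above.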
We attempted to front-run meme coin launches on the Solana blockchain by manually signing transactions directly against Helius RPC endpoints. Blindly relying on AI-generated snippets to navigate complex low-level networking constraints burned hours of development for minimal upside. We learned that sheer execution speed means nothing if you lack foundational infrastructural understanding.
Seeking scalable execution, we bought "proven" commercial algorithms via TradingView and hooked them to the Kraken API through Flask webhook servers. They were heavily over-optimized for backtests but failed miserably in the live market: webhook lag and poor stop-loss handling wiped out our early test capital (a $50 tuition fee) almost instantly.
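For context, the webhook bridge looked roughly like this minimal Flask sketch. The route, payload fields, and commented-out order hook are illustrative stand-ins, not the archived server:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def tradingview_webhook():
    # TradingView POSTs the alert body; parsing must be defensive because
    # the payload is whatever the alert message template emits.
    alert = request.get_json(silent=True) or {}
    side = alert.get("side")      # e.g. "buy" / "sell" (hypothetical fields)
    symbol = alert.get("symbol")
    if side not in {"buy", "sell"} or not symbol:
        return jsonify({"status": "rejected"}), 400
    # place_order(symbol, side)  # would call the Kraken REST API here;
    # this synchronous hop (TradingView -> webhook -> exchange) is where
    # the latency accumulates.
    return jsonify({"status": "accepted"}), 200
```

Every alert has to traverse TradingView's delivery queue, this synchronous handler, and then the exchange REST call, which is why stop-losses fired far later than the chart implied.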
We completely discarded commercial algorithms. Shifting to custom Python via Pine Agents, we broke free from TradingView's 2.5-month data limits. We wrote a new asynchronous Quart backend from scratch to prevent I/O blocking across multiple simultaneous token updates. This unlocked reliable 3+ year backtests and gave us the confidence to drop naive ML in favor of pure deterministic logic filtering.
We abandoned naive indicators early on. By developing custom NautilusTrader backends hooked into asynchronous WebSockets via Quart, we solved the I/O blocking problem. Core to our architecture is the AccountState.Lock, an asyncio mutex ensuring that our Signal Generator and Risk Manager never operate on conflicting real-time tick data, guaranteeing zero race conditions during state reconciliation on Hyperliquid.
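The locking discipline can be sketched as follows. The AccountState fields and coroutine names are illustrative; only the single-asyncio.Lock pattern mirrors the description above:

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class AccountState:
    # Shared view of market and position state, guarded by one asyncio.Lock
    # so the signal-generator and risk-manager paths never interleave
    # mid-update.
    last_price: float = 0.0
    position: float = 0.0
    lock: asyncio.Lock = field(default_factory=asyncio.Lock)

async def on_tick(state: AccountState, price: float) -> None:
    async with state.lock:  # signal-generator path
        state.last_price = price

async def reconcile(state: AccountState, exchange_position: float) -> float:
    async with state.lock:  # risk-manager path
        drift = exchange_position - state.position
        state.position = exchange_position
        return drift
```

With both paths acquiring the same mutex, a reconciliation pass always sees a consistent snapshot even while ticks are arriving concurrently.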
On the Machine Learning front, we initially dug deep into classification problems. We rejected basic price targets and used Stratified Training Splits to prevent "Zero-Class Convergence." We ran heavy Optuna trials for our TensorFlow Metal LSTM models, employing rolling normalizations (N=200). Ultimately, the added complexity of the ML models did not yield more robust execution, so our newer versions have moved entirely away from ML in favor of precise, deterministic filters.
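A hedged sketch of the two techniques named above, using scikit-learn's stratify option and a trailing z-score as one common form of rolling normalization; the data is synthetic and the names are illustrative:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

def rolling_zscore(close: pd.Series, n: int = 200) -> pd.Series:
    # Rolling normalization: each value is scaled by the trailing window's
    # mean/std, so the model never sees absolute price levels.
    mean = close.rolling(n).mean()
    std = close.rolling(n).std()
    return (close - mean) / std

# A stratified split keeps the class ratio identical in train and test,
# which guards against the model collapsing onto the majority class
# ("zero-class convergence") on imbalanced up/down labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = rng.choice([0, 1], size=1000, p=[0.9, 0.1])  # imbalanced labels
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
```

The first N-1 rows of the rolling z-score are NaN by construction and must be dropped before they reach the model.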
While the models we trade with today remain completely offline, this archive demonstrates our commitment to scaling institutional-grade execution algorithms.