Background: 7 years in finance, with both undergrad and grad degrees in the field. Came in expecting a practical application of ML to trading. What I found instead was a lesson in cognitive dissonance.
Who is this course for?
If you're comfortable with Python, pandas, and NumPy, and don’t mind skipping over financial logic, you’ll likely do fine. The code isn’t difficult. But you'll spend most of your time deciphering vague, bloated assignment descriptions and wrestling with formatting quirks.
A finance background is not required—in fact, it might just make the experience worse.
Assignments: When optimal isn’t optimal
The course's definition of "optimal portfolio" is... creative. Rather than using expected return models or basic MPT (modern portfolio theory), you're encouraged to derive your allocations directly from aggregated historical prices.
I ended up with an "optimal strategy" that went all-in on JPMorgan during a recession. If you think that's valid, great: go invest your own $1 million in JPM during a crash. If even you wouldn't follow your own strategy in real life, is it really optimal?
Assignments prioritize passing test cases, not sound reasoning. Following correct finance logic may cost you points. Following their logic may cost you sanity.
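To show what I mean, here's a minimal sketch of that failure mode. The tickers, the synthetic prices, and the cumulative-return objective below are my own stand-ins, not the actual assignment code, but they reproduce the behavior I'm describing: optimize allocations against the same price history you're evaluated on, and the "optimal" answer is to dump everything into whichever name happened to do best over that window.

```python
# Sketch only: optimizing allocations against in-sample prices degenerates
# into an all-in bet on the best past performer. Data and objective are
# invented for illustration; this is not the course's grading code.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
prices = np.cumprod(1 + rng.normal(0.0005, 0.02, size=(252, 4)), axis=0)
tickers = ["JPM", "XOM", "GLD", "AAPL"]  # illustrative names, not the assignment's universe

def neg_cum_return(allocs, prices):
    """Negative in-sample cumulative return of a buy-and-hold portfolio."""
    port_val = (prices / prices[0]) @ allocs      # normalize each ticker, then weight
    return -(port_val[-1] / port_val[0] - 1.0)

n = prices.shape[1]
result = minimize(
    neg_cum_return,
    x0=np.full(n, 1.0 / n),                       # start from equal weights
    args=(prices,),
    method="SLSQP",
    bounds=[(0.0, 1.0)] * n,
    constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}],
)
print(dict(zip(tickers, result.x.round(3))))      # ~1.0 on the single best past performer
```

Because the in-sample objective is linear in the allocations, the optimizer always lands on a corner of the simplex, i.e. 100% in one ticker. That's exactly how you end up "all-in on JPMorgan," and no amount of tuning turns it into a portfolio you'd actually hold.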
ML Without Price?
In one project, you're told to use ML to predict buy/sell/hold decisions without modeling price first.
That’s right: skip the part where prices are predicted—just go straight to decision-making. It’s like baking without checking if you have ingredients. Price is the foundation of all trading logic, yet this course pretends it’s optional.
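For the curious, here's roughly what that setup looks like. Every indicator, threshold, and model choice below is a made-up stand-in of mine rather than the project's actual spec; the point is the shape of the pipeline: indicator snapshots in, buy/sell/hold labels out, with no price forecast anywhere in between.

```python
# Sketch only: classify buy/sell/hold directly from indicators, skipping any
# price model. Indicators, thresholds, and the classifier are my own choices.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
price = pd.Series(np.cumprod(1 + rng.normal(0.0003, 0.015, 500)))  # synthetic price path

# Stand-in technical indicators computed from the price series.
features = pd.DataFrame({
    "momentum_10": price / price.shift(10) - 1.0,
    "sma_ratio_20": price / price.rolling(20).mean() - 1.0,
    "vol_20": price.pct_change().rolling(20).std(),
})

# Labels come from the 5-day forward return: +1 buy, -1 sell, 0 hold.
fwd_ret = price.shift(-5) / price - 1.0
labels = pd.Series(0, index=price.index)
labels[fwd_ret > 0.02] = 1
labels[fwd_ret < -0.02] = -1

data = pd.concat([features, labels.rename("label")], axis=1).dropna()
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(data[features.columns], data["label"])
print(clf.predict(data[features.columns].tail(5)))  # trade signals, no price forecast in sight
```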
The JDF Format
Reports must be submitted in a format called JDF, an internal invention that mimics the rigidity of a fake academic template. It's not used anywhere else and exists solely to waste your time on details like section alignment and label names.
If they spent half as much time updating course content as they did enforcing this format, the class would be far more valuable.
Exams: A case study in suffering
Exams are long, confusing, and oddly phrased. The questions don’t test understanding so much as your ability to interpret convoluted logic under time pressure. Most are stats-heavy and unrelated to trading.
Why are the exams never updated? Here’s what the course team says:
“We reuse tests semester-to-semester… not because we’re lazy, but because it gives us a consistent barometer to assess how different semesters are doing, and it gives us more data to conclude that our tests are valid measures of outcomes. Even if assignments stay the same, the qualitative and subjective nature of assignment grading makes them a weaker barometer... and they will change, so that makes them even less reliable. It's important we assess whether or not we're making things better semester to semester.”
Teaching, or just task monitoring?
The original instructor clearly cared. But since he left, the course feels held together by minimal TA oversight. Interaction is limited, grading is rigid, and feedback is sparse. There’s no real sense of academic engagement.
And, as the saying goes: pay peanuts, get monkeys.
Final Verdict:
This course teaches just enough ML to pass test cases, but too little finance to be credible. It might help you practice pandas and NumPy. It will not help you understand trading, nor will it prepare you for roles in quant, fintech, or investment analysis.
If you're just here for credits and have a high tolerance for unclear expectations, you'll probably do fine.
If you're here to learn anything meaningful about the intersection of ML and trading—look elsewhere.
(This review was edited and refined with the generous help of our dear friend ChatGPT. If you're unhappy with it… well, there's not much I can do about that either.)