Somehow, AI Is Still Terrible at Stock Market Analysis
Bard vs ChatGPT for Stock Market Analysis

I recently tried to use the two most popular AI tools to analyze stock market data, hoping they would make it easier for me to identify winning investment opportunities. The project stemmed from my interest in the stock market in general: I have spent a good chunk of the past few years writing code that analyzes stock market data, and I wanted to see if AI could do better.
It also stemmed from seeing a lot of content about making money on the stock market with the help of AI.

I found that most of this stuff is baloney, and in this article, I will go over the (multiple) obstacles I came across while trying to make AI understand the stock market. I will compare the performance of OpenAI’s ChatGPT and Google’s Bard along four main axes:
- Grabbing data
- Understanding data
- Doing basic maths on the data
- Identifying patterns in the data
Note: none of the insights contained in this article should be considered investment advice. Never invest more than you can afford to lose.
1. Grabbing data — Fail
ChatGPT
In the case of ChatGPT, this part of the experiment was cut short, because it has no knowledge of our world post-2021. Grabbing stock market data after that date is therefore simply impossible. I did try to fetch data from before 2021, but nothing worked either.


Bard
Using Bard was more interesting for this part of the experiment because, unlike ChatGPT, it has access to the most recent world events and data: it is developed by Google, which has pretty much our whole world indexed in a huge database.
At first, I was happily surprised by the results from Bard, even impressed:

The data is displayed in a nice table without my even asking, and you have the option to export it straight to Google Sheets, which is very impressive. But as I double-checked the data, I realized there was one major problem: it was completely wrong.

I can’t figure out why this happens. It looks like the AI is basically making up data, and I haven’t been able to identify any logic behind it or any source for it. It’s not even that the AI grabbed the data from a different period and mistakenly showed it as the data from 2019 to 2020; it’s just completely, unequivocally wrong.
I then tried to help the AI by showing it where it could grab the latest data for the AAPL ticker, as I had done with ChatGPT. I had higher hopes this would work here because, unlike ChatGPT, Bard can rely on the indexing power of Google. I thought it could potentially understand URLs and even read source code, making it easier to extract the right data. Instead, something crazy happened.

Not only did the AI return some data (the key word here being some), but it also showed me the snippet of code it said it had used to grab said data (in yellow). The snippet is written in Python, a language I’m very familiar with since I use it to write my own scripts.
I ran the Python code on my laptop, tweaking it only by adding one line to print the output. It turned out that the Yahoo Finance URL was returning a 404 when trying to fetch the data, because the content is loaded dynamically on the website.
The crazy part was that even though the script couldn’t grab the data, the AI still returned a table (in red), which was again 100% wrong. I have no idea where it found this data or how it grabbed it; it just doesn’t make any sense.
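For what it’s worth, grabbing this data yourself takes only a few lines with the yfinance Python package. Here is a minimal sketch (not something either AI suggested):

```python
import yfinance as yf  # pip install yfinance

# Weekly AAPL candles for 2019-2020, the period I asked Bard about
data = yf.download("AAPL", start="2019-01-01", end="2020-12-31", interval="1wk")
print(data[["Open", "High", "Low", "Close"]].head())
```

No scraping, no 404s.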
As we can see, grabbing the data was a big fail for both AIs, so I decided to help both of them even more by feeding them the data myself.
2. Understanding data — 50/50
ChatGPT
In order to feed the data to ChatGPT, I downloaded the year-to-date AAPL historical data with weekly intervals from Yahoo Finance as a CSV file.

I then copy-pasted the raw data as a prompt and hit send. I did this in a new chat, without providing any context, just to see what the AI would say. I was pleasantly surprised:

I informed the AI that this was data for the AAPL ticker and asked it to fetch the data from one specific row, the week of 2023–04–10:

Yay! This was the most accurate output I had gotten from an AI since the beginning of the experiment. Even without making any calculations yet, the AI was able to understand the data. I was even more excited to try the same thing with Bard.
Bard
In order to feed the data to Bard, I used the same approach as with ChatGPT: I copy-pasted the data into a new chat and waited for the output:

I didn’t ask the AI to do so, but it once again laid out the data in a nice table for me, again with the option to export it to Google Sheets right away for further analysis. As with ChatGPT, I made sure the AI understood what it was dealing with.

Again, the answer provided by Bard was 100% wrong. The closing stock price for the AAPL ticker on 2023–04–10 was $165.210007. I looked up the data returned by the AI in the original CSV file, and it is actually from 2022–10–10.
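For comparison, doing this lookup in code is trivial and leaves no room for error. A minimal sketch with pandas, assuming the downloaded file is saved as AAPL.csv:

```python
import pandas as pd

# Yahoo Finance CSVs use unambiguous ISO dates (YYYY-MM-DD)
df = pd.read_csv("AAPL.csv", parse_dates=["Date"])

# Closing price for the week of April 10th, 2023
print(df.loc[df["Date"] == "2023-04-10", "Close"].iloc[0])  # 165.210007
```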

The scary part is that although the AI provided 100% wrong information, it presented the output as if it were 100% right, even replacing the date from the wrong row with the date from my prompt.

I tried once more to understand what had happened and thought that maybe something was amiss with the date formatting (although ChatGPT had been fine with it). Maybe Bard thought that 2023–04–10 meant October 4th, 2023 (putting the day before the month is more common in Europe) and not April 10th, 2023. So I tried to explain, but it got so confused it was embarrassing.


In the end, while ChatGPT understood what it was dealing with right away, I couldn’t get Bard to understand the data at all. I moved on to the next part of the experiment.
3. Basic maths — Interesting results
Based on the answers I had gotten from both AIs up to that point, I was pretty sure this was going to be the most challenging part.
The goal here was to get the AI to calculate the change in price from one week to the next, a very basic mathematical operation:


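For reference, the whole operation is a one-liner in pandas. A quick sketch using the same CSV I fed to both AIs:

```python
import pandas as pd

df = pd.read_csv("AAPL.csv", parse_dates=["Date"])

# Percentage change of the closing price from one week to the next
df["Change (%)"] = df["Close"].pct_change() * 100
print(df[["Date", "Close", "Change (%)"]].head())
```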
ChatGPT
Here, the AI understood my request (in green) right away and provided accurate data. Pretty impressive:

Bard
Based on previous answers from the AI, my hopes that Bard would get this right were pretty low, and it only went downhill from there. When I asked Bard the exact same question I had asked ChatGPT, it once again came up with a plausible-looking way to get to the answer, written in Python. But there were three major problems:

- The code was missing the data needed to get to the right answer: it only included data points for 13 weeks, while the original dataset included dozens more.
- The example answer it gave was wrong and vague. It didn’t say which year the AI was talking about, and regardless, the change from one week to the next around those weeks was not 5.5%, whether in 2022 or 2023.
- The source the AI linked to was beyond bizarre. At the bottom of the answer, right where the AI said the output of the code would appear (it didn’t), it linked to a random source that has nothing whatsoever to do with any of the prompts I had entered before:

This is the website of a goat rental program from 2022. I have no idea what this is, nor how the AI could possibly have come up with it as a reliable source for calculating the weekly percentage change in the Apple stock price.
For Bard, this was yet another major failure.
4. Finding patterns — Not there yet
At this point in the experiment, it was pretty clear that Bard was out of the race, so I only tried the following with ChatGPT. The goal was to try to identify some basic trends and patterns. Here are a few examples of the prompts I used, along with their answers:



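To give an idea of the kind of pattern I had in mind, here is one coded by hand: a simple moving-average crossover. This is an illustrative sketch, not one of my actual prompts:

```python
import pandas as pd

df = pd.read_csv("AAPL.csv", parse_dates=["Date"])

# Short and long moving averages of the closing price
df["MA4"] = df["Close"].rolling(4).mean()
df["MA12"] = df["Close"].rolling(12).mean()

# Weeks where the short average crosses above the long one (a classic bullish signal)
crossed_up = (df["MA4"] > df["MA12"]) & (df["MA4"].shift(1) <= df["MA12"].shift(1))
print(df.loc[crossed_up, ["Date", "Close"]])
```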
What to learn from all this
90% of the time during this experiment, I found that it would have been faster to code the solution myself than to ask the AI. The biggest downside of AI at the moment is that you have to be really precise and explain things in detail if you want accurate answers. And even then, the AI will fail you many times.
While I do believe AI will revolutionize the way we do everything (and the change is already underway), it’s just not there yet. It’s almost as if you have to be an “AI expert” to understand how AI works and how to interact with it. That’s one of the reasons there is a big market for “AI prompts”, with tech guys and content creators selling bundles of hundreds of prompts to help people better interact with AI and get good results from it.

Bard or ChatGPT?
Going into this experiment, I honestly thought Bard would win, specifically because it has access to Google’s huge computing and indexing power. But in the end, Bard’s performance was more than underwhelming: while both AIs started off pretty equal, ChatGPT quickly took the lead and stayed ahead for the rest of the race.
Not only did Google’s AI seem to fabricate answers when it couldn’t find the right data, but it also linked to completely random sources as explanations and failed to understand basic requests, even when I tried to explain how wrong it was. ChatGPT, on the other hand, was very quick to understand the data and was also a lot more accurate in its results.
For both Bard and ChatGPT, however, I found that coding the answer myself would have been faster than asking the AI in the first place. Although this was an interesting experiment, I will not be using AI to analyze stock market data going forward. I prefer to stick to my own coding skills, and to my own brain.
I’m probably not as knowledgeable as an AI, but I’m quicker to understand, at least for now, until the robots take over…