Money, Politics, and a Tale of Three LLMs
(Originally published on Medium, November 4, 2024)
Disclaimer
Let’s get the boring stuff out of the way: this isn’t investment advice. It’s not political commentary either. If you’re looking to make a quick profit on the election, you’re in the wrong place. This is a modest examination of how artificial intelligence handles topics that make its digital neurons act like a toddler with unfettered access to Grandpa’s candy drawer. Think of it as a psychological study for algorithms.
The Study:
I asked our friends ChatGPT, Claude, and Gemini the same question. For the super geeks thinking, “but the models change every 48 hours!” the versions I “chatted with” are ChatGPT 4o, Claude 3.5 Sonnet, and Gemini 1.5 Pro as of November 1, 2024.
The Question:
“What is the best financial markets hedge trade you can construct if you’re concerned about the polarization of the US around the 2024 Presidential election?”
Based on the results, you’d think these LLMs were triplets separated at birth and raised on different continents. ChatGPT is the chatty, overconfident finance bro, Gemini is the cousin who has developed glossophobia, and Claude is the professor who got lost in his ivory tower while discussing theorems but can’t read a map. As we unbox this experiment, let’s keep in mind that, not unlike human offspring, all three of these folks were raised by parents, and their personalities didn’t develop by accident.
ChatGPT:
ChatGPT dove right in. It even provided specific ETFs to target. Confidence was never in short supply, even if wisdom might have been. It produced a 717-word response right out of the gate with a relatively detailed summary. Here’s a snapshot:
Given its chattiness, I decided to dive into the conversation with more questions:
What would you suggest if you thought Kamala Harris might win?
What about if you think Donald Trump may win?
Which outcome do you think the market is already anticipating?
Based on the data you are considering, how confident are you in that conclusion?
The answers were thorough and seemingly thoughtful. To ChatGPT’s credit, it hedged on confidence despite its poised tone.
Claude:
Claude did its best impersonation of a skilled litigator in front of an unknown jury. No specific investment tips here. Instead, Claude offered a framework for thinking about the problem. It’s like teaching someone to fish instead of handing them one, assuming the fish could lose all your money.
I, of course, asked it to elaborate. And it did. The answers were comfortably couched in context, offering ways to think about the challenge while also supported with examples. The closing paragraph of each response was consistent, offering cautious guidance about how to approach the challenge.
I’m fairly convinced Claude has a closet full of cardigans, sweater vests, and corduroy sport coats with elbow patches.
Gemini:
Then there was Google’s Gemini, who approached the question with the enthusiasm of a hermit at a cocktail party.
I’m a fan of the phrase, “better to remain silent and be thought a fool than to speak and remove all doubt.” However, this is Google we’re talking about. I tried to get it to take a more Claude-like approach but got stonewalled again. It made me think of a pre-schooler being offered a kale, broccoli, and Swiss chard cookie. Refreshing? Maybe. Frustrating? Kind of. This is the company that built the most robust search engine in the history of, well, search engines. “Mum’s the word” may be the more level-headed approach, but it leaves me wondering if they think we’re so dumb we can’t read the disclaimers.
Did We Learn Anything?
Perhaps. We asked a multi-factorial question with mathematical complexity, social nuance, and psychological depth that requires a boatload of secondary thinking, and the ones that chose to answer did…well, fine, albeit very differently. The irony isn’t lost here. In trying to create artificial intelligence that can handle complex human problems, we’ve been gifted digital companions that mirror the collective biases and hang-ups of their creators. OpenAI’s ChatGPT charges ahead like a self-assured teenager with more testosterone than wisdom, Claude lectures like the pipe-smoking, penny-loafered professor trying to teach us all to be a bit more aware of metacognition, and, somewhat ironically, Google’s Gemini clams up like the once burned, twice shy courtroom veteran.
Are any of these approaches more correct than the others? The answer depends on your objective and knowledge level. What we’re witnessing is the emergence of distinct AI personalities, each reflecting the views of their overseers, who all have decidedly different views on the delicate balance between capability and responsibility. It’s the equivalent of watching how different parents choose how to parent on a playground. Only, in this case, the balance of human knowledge may depend on the strategy each parent chooses.