*Mockup of a current personal Claude Project*
I should probably rename this series "ChatGPT or Whatever," because I've gone well beyond text-in/text-out into MCP servers and agentic coding. So, to answer my original question again: ChatGPT as a BI platform helper? Hell, yes.
How I got here (gradually)
- Claude started creating artifacts for me—things I could publish and share with others, built in my browser. Can't find a to-do app that matches your workflow? No problem, just create your own.
- Claude also makes it easy to integrate with remote MCP servers. The first one I used was the Microsoft Learn MCP Server, which meant I could ask questions and get answers based on the current official documentation (plus the capabilities of the LLM) instead of only whatever the model remembered from training.
- As I saw others creating their own "products" at my company, I eventually moved to Claude Code and GitHub Copilot in VS Code to create programs that help us do our jobs better and faster. I also created Claude Skills to assist with specific tasks. (I'm deliberately not giving specific work examples here—we're seeing more competition for some of this.)
Of course, it's not all roses. There was that time the Power BI Modeling MCP "fixed" all of the invalid DAX measures in my model by replacing the expressions with BLANK().
The paradigm shift (not a cliché I use lightly)
Once you start down this path, everything looks like a potential way to leverage automation—or at least an LLM.
Can I create a Claude Project to help me analyze, plan, and implement ways to reduce my home insurance premiums, or do some financial planning? Can I give my website a facelift? Can I create a bookmarklet to quickly document what's on sale at Whole Foods this week? Can I create software to plan and track my entire year (see mockup at top of post)? Yes, if I feel like it.
I hope this isn't atrophying my brain, but I also sometimes outsource low-value decisions when I'm feeling burned out and there are no clear criteria I can use to evaluate options. Think: what to make for dinners next week.
Who gets value from this, as of now
Paraphrasing from somewhere: Knowledge doesn't ensure success, or we'd all be billionaires with 6-pack abs. The same is true of LLMs. Most of us have the same or similar access. Anecdotally, the people getting the most out of this are the ones who were already good problem solvers, creative, and curious.
LLMs can't do everything, but you can find ways to get them to help if you're creative about use cases and efficient at providing context. On that front, understanding metadata still matters—a lot. There's still plenty you can do with existing software by leveraging metadata, DAX, and C# to speed up your work. And you need to understand metadata if you want to be effective at identifying what content you can programmatically update for bigger use cases.
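As an illustration of the metadata angle: a model.bim file is just TMSL JSON, so a few lines of script can turn it into the kind of compact model summary that's handy to paste into a prompt. This is a minimal sketch, not anything from the post itself; the embedded JSON is a hypothetical toy model, and real BIM files have more fields (and `expression` can also be an array of strings) that a real script would need to handle.

```python
import json

# Hypothetical toy model.bim content (TMSL JSON) for illustration only.
SAMPLE_BIM = """
{
  "model": {
    "tables": [
      {
        "name": "Sales",
        "columns": [{"name": "Amount", "dataType": "decimal"}],
        "measures": [
          {"name": "Total Sales", "expression": "SUM(Sales[Amount])"}
        ]
      }
    ]
  }
}
"""

def summarize_model(bim_text: str) -> list[str]:
    """Return one line per measure: table, measure name, and DAX expression."""
    model = json.loads(bim_text)["model"]
    lines = []
    for table in model.get("tables", []):
        for measure in table.get("measures", []):
            lines.append(
                f"{table['name']}.{measure['name']} = {measure['expression']}"
            )
    return lines

for line in summarize_model(SAMPLE_BIM):
    print(line)
```

The same walk over `tables`/`measures` is also the starting point for the bigger use case the paragraph mentions: once you can enumerate the objects, you can decide which ones to update programmatically.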
Updates to previous weaknesses
In my last post, I mentioned a couple of areas where generative AI had been less helpful to me.
- DAX: Claude Projects and Skills have made me more effective at getting help with DAX, particularly when I provide a BIM file or a description of the data model. The Power BI Modeling MCP Server helps here, too.
- Governance and admin in Power BI/Fabric: This is helped significantly by the Microsoft Learn MCP Server. Instead of confidently making things up about licensing, the model can now look up the answers in the 40 documentation pages where Microsoft deigned to scatter the various licensing limitations and nuances.
