Let’s debrief: data & AI | Issue January 2026

We like to stay up to date.

With news, product launches, and everything happening in data and AI, we often share updates in our Slack channels and try out new things as soon as they become available in our region.

Instead of keeping those updates tucked away in internal messages, we realised it could save people time to have one place where they can find that information, along with a few grounded thoughts from people working in the field.

There is always something new worth paying attention to.

‘Let’s debrief’ is exactly that, and now it’s yours to read.

What stood out this month?

Several changes focus on parts of the data stack: reporting, dashboard performance, and the first steps of using AI in everyday workflows. In each area, the tools are becoming more explicit about how they are expected to be used.

Choices that were easy to postpone before are now harder to ignore, including how reports are maintained, which dashboards are expected to stay fast, and what needs to be in place before AI features work reliably.

Power BI: Enhanced Report Format (PBIR)

Power BI is transitioning to the Enhanced Report Format (PBIR) as the default for new reports starting this month (January 2026). Existing reports will convert automatically once they are edited and saved, with Power BI Desktop following later this year.

PBIR introduces a more structured file format, with support for version control and deployment workflows. Reporting starts to resemble software: something that evolves and needs care, ownership, and maintenance.
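For readers curious what "more structured" means in practice: a PBIR definition is a folder of small JSON files rather than a single opaque file, which is what makes meaningful diffs and pull requests possible. The layout below is an illustrative sketch only (names such as `Sales.Report` and the page and visual identifiers are placeholders; check an exported report against Microsoft's documentation for the exact structure in your version):

```text
Sales.Report/
  definition.pbir            # pointer to the report definition and its dataset
  definition/
    report.json              # report-level settings
    pages/
      pages.json             # page order and the active page
      <pageId>/
        page.json            # one file per page: layout, filters
        visuals/
          <visualId>/
            visual.json      # one file per visual: type, fields, formatting
```

Because each page and visual lives in its own file, two authors editing different visuals no longer collide on one monolithic file, and a reviewer can see exactly which visual a commit changed.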

Shared reports continue to work as they always have. But the moment you open one to make a change, the structure underneath comes into view, and habits that once stayed out of sight become harder to ignore.

So treat the transition as an opportunity to refine how your reports are versioned, reviewed, and maintained!

Snowflake: Interactive Tables and Interactive Warehouses

Snowflake introduced Interactive Tables alongside Interactive Warehouses, which run continuously and only query tables that are explicitly linked to them. Once set up, dashboards stay fast, even with many people using them at the same time.

That setup works well for dashboards that are checked throughout the day. It becomes harder to justify views that are used once a week or for ad-hoc analysis that could tolerate a short wait. In practice, those distinctions are rarely revisited. Dashboards get placed on the same setup because they already exist there, not because anyone made a fresh choice.

The result is fast dashboards everywhere, and infrastructure running all the time, whether it is still needed or not.

Databricks: Agent Bricks and AI model updates

Databricks continues to expand access to AI functionality with the general availability of Agent Bricks: Knowledge Assistant and support for OpenAI GPT-5.1 Codex models via Mosaic AI Model Serving.

These updates make it easier to deploy AI assistants on top of your internal documents and to support code-related workflows with specialised models. The technical barrier to experimentation is noticeably lower than it was even a year ago.

At the same time, these capabilities rely heavily on how your information is organised. Where documentation is fragmented or ownership is unclear, AI assistants tend to reproduce that uncertainty at scale, and the models don’t always perform the way you hope.

Around the same time as these updates, we at i-spark were deploying an analytical AI assistant in production on Databricks, and we quickly ran into limitations around structure, ownership, and orchestration.

Rebuilding the system as a multi-agent setup changed what was possible, but it also made it very clear that most of the challenges were not about AI itself. They were about engineering discipline, domain knowledge, and operational choices that the tool assumes are already in place.

We’ve written a separate, in-depth piece about what that journey looked like in practice, including what broke, what surprised us, and what actually worked in the end.

Building a multi-agent AI system: what going to production really takes

Klipfolio: Multi-factor authentication controls

Klipfolio has been part of our work for a long time.

Nearly 10 years now.

Long enough to see how expectations around analytics have shifted to reliability, security, and shared responsibility.

This month, Klipfolio introduced the option to enforce multi-factor authentication for all email-based logins and to remember multiple trusted devices per user.

On the surface, this is a modest update.

In practice, it reflects rising expectations around analytics security, particularly in environments where dashboards are shared externally or accessed across organisational boundaries, for example between departments with different governance requirements.

A perspective worth sitting with

Alongside the tool updates happening across data and AI, the topics of #responsibility, #governance, and ethical use of data are becoming increasingly pressing.

Opinions keep popping up, and more and more LinkedIn posts are being written about it; chances are you’ve already seen more than one in your feed.

A few months ago, we reflected on what it takes to build data and AI solutions in environments where mistakes have real consequences for people. Our perspective comes from a recent project where slowing down and making careful choices were a bigger priority than speed.

The full reflection goes deeper into why ethics cannot be treated as an afterthought in data and AI work, and what it means to embed responsibility into both technology and organisational processes.

→ Why we believe every data and AI solution needs an ethical foundation

And at the end, a question for you

Ending with a question and leaving space for reflection fits how we deal with new updates and challenges within the i-spark team. We tend to ask why first, and when needed, turn to the rubber duck or a colleague to talk things through.

So grab your rubber duck and have that conversation. An inner monologue, if you like.

Or take a colleague for a coffee and discuss together:

What feels like a tool issue today, but is really about how things are set up?
