The Modern Data Stack: Past, Present, and Future


Tristan Handy

Dec 01, 2020

Opinion

I recently gave a talk with this title at Sisu's Future Data conference, and since I think in prose and not in Powerpoint, I had to write the blog post before I could put the slides together. It's taken me a while to put the final polish on this and release it to the world, but I hope you find it valuable. If you'd like to watch the talk in full, you can find it here.

Data products have drawn a lot of attention, raised a lot of capital, and generated a lot of change over the past decade. They've driven a huge number of changes in how the most data-forward organizations are run---Stitch Fix is a million miles away from being a traditional clothing retailer and Airbnb does not at all resemble a traditional hotelier. And data products have fundamentally changed the careers of many of us data professionals, creating space for entirely new job titles and elevating once-menial roles into highly strategic career paths.

🗓 We have several talks lined up at Coalesce (next week!) on how data products have transformed careers and teams: starting an analytics engineering team, structuring a data team, and adopting a product mindset.

But for all of this change, I feel like we've hit a bit of a plateau over the past couple of years. I've personally been working in the "modern data stack" since late 2015---five whole years! And during that time, the set of products that make up this best-of-breed stack has been remarkably consistent (this list is certainly not exhaustive):

  • Ingestion: Fivetran, Stitch
  • Warehousing: Bigquery, Databricks, Redshift, Snowflake
  • Transformation: dbt
  • BI: Looker, Mode, Periscope, Chartio, Metabase, Redash

What's more, while there certainly have been incremental advances in each of these products over that time, none of their core user experiences has fundamentally changed. If you fell asleep, Rip Van Winkle-style, in 2016 and woke up today, you wouldn't actually need to update your mental model of how the modern data stack works all that much. More integrations, better window function support, more configuration options, better reliability... All of these are very good things, but they suggest a certain maturity, a certain stasis. What happened to the massive innovation we saw from 2012-2016?

To be clear, all of the above applies to dbt just as much as it does to any of the other products listed. If you compare dbt-circa-2016 to dbt-circa-2020 you'll find that, while the modern product is far more powerful, the core user experience is very similar. My goal here is not to cast aspersions, but rather to attempt to understand the dynamics of the product ecosystem that all of us are building our careers on top of.

This feels important to me. Humans are tool-building and tool-using creatures---our tooling defines our capabilities, and has, for our entire history as a species. As such, the progress of tooling in this space could not be more important to us as practitioners. When I first used Redshift in 2015 I felt like I had been granted superpowers. When am I getting more?

In this post, my goal is to take a look at the modern data stack during three different timeframes:

  • Cambrian explosion I, from 2012 - 2016
  • Deployment, from 2016 - 2020
  • Cambrian explosion II, from 2020 - 2025

I'm going to wear multiple hats throughout this post. My primary hat is that of the practitioner: the analyst who has been building a career in data for over 20 years and has deep experience in every single one of these tools. I'll also put on my founder hat from time to time, the part of me that has had the opportunity to build one of the major products in today's modern data stack. Regardless of which hat I'm wearing, I'm incredibly excited about the future.

Cambrian Explosion I, from 2012 - 2016

When Fishtown Analytics moved into our new offices in November of 2019, one of the first things I did was to hang a painting on the wall. It's a piece of modern art from the 70's called Redshift, and I purchased it in an auction on Everything But The House because I loved the name. In my opinion, the modern data stack catalyzed around the release of Amazon Redshift in October of 2012, and hanging this massive painting at the entrance to our office memorializes its historical importance.

Take a look at the dates on which the core products in the other layers of the modern data stack were founded:

  • Chartio: 2010
  • Looker: 2011
  • Mode: 2012
  • Periscope: 2012
  • Fivetran: 2012
  • Metabase: 2014
  • Stitch: 2015
  • Redash: 2015
  • dbt: 2016

Or, here's another view of a related dataset: cohort data on total funding raised by a few of these companies. You can see that 2012 really kicked things off.

While several of these products were founded prior to Redshift's launch, its release is what caused their growth to take off. Using these products in conjunction with Redshift made users dramatically more productive. Looker on Postgres is fine, but Looker on Redshift is fantastic.

This night-and-day difference is driven by the internal architectural differences between MPP (massively parallel processing) / OLAP systems like Redshift and OLTP systems like Postgres. A complete discussion of these internals is beyond the scope of this post, but if you're not familiar I highly recommend learning more about this, as it shapes nearly everything about the modern data stack today.

In short, Redshift can answer analytical questions, processing many joins, on top of huge datasets, 10-1000x faster than OLTP databases.
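The core architectural idea behind that speedup can be sketched in a few lines. The toy below (pure Python, not a real database engine) illustrates why column-oriented storage favors analytical scans: an aggregate over one column only touches that column, while a row store walks every field of every row. Real MPP engines add compression, vectorized execution, and parallelism on top of this layout.

```python
# Toy illustration of row-store vs. column-store scans.
# A sketch of the storage-layout idea only, not a real engine.

rows = [{"id": i, "name": f"user{i}", "revenue": float(i)} for i in range(1000)]

# Row-oriented (OLTP-style): the scan visits every row in full,
# even though only `revenue` is needed for the aggregate.
total_row_store = sum(r["revenue"] for r in rows)

# Column-oriented (OLAP-style): `revenue` lives in one contiguous
# list, so the scan never touches `id` or `name`.
columns = {"revenue": [r["revenue"] for r in rows]}
total_col_store = sum(columns["revenue"])

assert total_row_store == total_col_store == 499500.0
```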

While Redshift is a very effective MPP database, it wasn't the first. MPP databases had been popularized over the preceding decade, and many of those products had (and have) marvelous performance. But Redshift was the first cloud-native MPP database, the first MPP database that you could purchase for $160 / month instead of $100k+ / year. And with that reduction in price point, all of a sudden the floodgates opened. Redshift was, at the time, AWS' fastest-growing service ever.

10-1000x performance increases tend to change the way that you think about building products. Prior to the launch of Redshift, the hard problem in BI was speed: trying to do relatively straightforward analyses could be incredibly time-consuming on top of even medium-sized datasets, and an entire ecosystem was built to mitigate this problem.

  • Data was transformed prior to loading into the data warehouse because the warehouse was too slow (and too constrained) to do this heavyweight processing itself.
  • BI tools did lots of local data processing to work around the warehouse bottleneck and give users adequate response times.
  • Data processing was heavily governed by central teams to avoid overwhelming the warehouse with too many end-user requests.

Overnight, all of these problems just went away. Redshift was fast, and cheap enough for everyone. This meant that the BI and ETL products that had built businesses around solving these problems immediately became legacy software, and new vendors arose to build products more suited to the new world. Entrepreneurs saw opportunity and flocked to the space, and these products became the ones that largely define the world we live in today.

Before wrapping up this section, I want to just say that my statements about Redshift's historical importance shouldn't be taken as a stance on which data warehouse is best today. BigQuery didn't release standard SQL until 2016 and so wasn't widely adopted prior to that, and Snowflake's product wasn't mature until the 2017-2018 timeframe (IMHO). In fact, if you looked at a breakdown of usage between the three products circa 2016, I think you'd see Redshift's usage at 10x the other two combined. So, for those of us building things in the modern data stack, Redshift was the ocean from which we evolved.

Deployment, from 2016 - 2020

If Redshift sparked so much innovation from 2012-2016, why did things start to slow down? This is something I've been mulling over since 2018, when I first started to viscerally feel this decline in the rate of change. I realized that the stack of products we were recommending to our consulting clients had stayed the same since the day we started Fishtown Analytics, which really bothered me. Were we missing out on some groundbreaking new products? Were we getting stale?

It turns out that this is a normal cycle for industries to go through. A major enabling technology gets released, it spurs a bunch of innovation in the space, and then those products go through a deployment process as companies adopt them. You can watch this happen in even the very largest technological shifts. In fact, I just searched "cumulative miles of railroad track," grabbed some data, and voila!---an S curve:

Each technology individually goes through its own "S" curve, moving from development to deployment, and as each round of technologies begins to mature it both attracts new users and becomes more reliable.

This process, most effectively described by Carlota Perez in her influential 2010 paper, happens over and over, writ large and small, as technological change ripples through the world.

What we saw from 2005 (when Vertica was released) to 2012 (when Redshift was released) was the early development phase of the MPP database---the beginning of its S curve. And from there, it's gone warehousing >> BI >> ingestion >> transformation. Note that we are still in the early days of this deployment phase!

If I examine this theory as a user, it checks out. I can tell you from first-hand knowledge that the experience of using literally every one of the products I listed above has improved dramatically over the past four years. Yes, Fivetran and Stitch still move data from point A to point B, but their reliability has improved dramatically, as has their connector coverage. The same is true for the other layers of the stack as well. dbt, whose track record I know quite well, has been completely rearchitected since 2016 to be more modular, more performant, and more extensible---all while not changing the fundamental UX.

This is what it looks like to traverse up the S curve. Early adopters are forgiving, but technologies need to improve to be adopted by larger and larger audiences. The telegraph went through the same thing: Thomas Edison invented a telegraph multiplexer in 1874, thereby enabling Western Union to quadruple the throughput of its existing lines. Same telegraph, more throughput.

Seen through this frame, this is actually quite exciting. We're seeing these foundational technologies mature: extending their scope to more use cases, becoming more reliable. These are exactly the things that need to happen to enable the next wave of innovation in the modern data stack, a wave that will be unlocked by these now-foundational technologies.

Cambrian explosion II, from 2021 - 2025

Let's summarize real quick. We saw a tremendous amount of innovation immediately following the launch of Redshift in 2012, unlocking brand new levels of performance, efficiency, and new behaviors. We then saw a deployment period in which these nascent products were adopted by the market, improved their technology, and rounded out their feature sets. By now, these products are ready to act as a foundation on which subsequent innovations can be built.

So: we're poised for another wave of innovation, another Cambrian explosion. What types of products will that bring?

I'm not an oracle, but I do spend a lot of time thinking about this stuff and have lots of conversations with interesting people building and investing in products in the space. I think we can take useful clues from the state of the world today: both the good and the bad. The good aspects represent our areas of strength, the solid foundation to build on, while the bad aspects represent opportunity areas.

The Good

  • Horizontal products: We no longer need to buy a bunch of vertical-specific products to do analytics on specific things; we push data into a warehouse and can then analyze it all together in one common set of tools.
  • Fast: The modern data stack is fast from both an iteration perspective---connecting new data and exploring it is a snap relative to 2012---and a pure query execution time perspective, as the performance breakthroughs of the MPP database now feed through the entire stack.
  • Unlimited scale: Using cloud infrastructure, it is now possible to trivially scale up just about as far as you may want to go. Cost now becomes the primary constraint on data processing.
  • Low overhead: Sophisticated data infrastructure in 2012 required massive overhead investment---infrastructure engineers, data engineers, etc. The modern data stack requires virtually none of this.
  • United by SQL: In 2012 it wasn't at all clear what language / API would be primarily used to unite data products, and as such integrations were spotty and few people had the skills to interface with the data. Today, all components of the modern data stack speak SQL, allowing for easy integrations and unlocking data access for a broad range of practitioners.

The Bad

  • Governance is immature: Throwing data into a warehouse and opening transformation and analytics to a broad range of people unlocks potential but can also produce chaos. Tooling and best practices are needed to bring trust and context to the modern data stack.
  • Batch-based: The entire modern data stack is built on batch-based processing: polling and job scheduling. That is great for analytics, but a transition to streaming could unlock tremendous potential in the data pipelines we're already building.
  • Data doesn't feed back into operational tools: The modern data stack is a one-way pipeline from operational data sources to warehouses to some type of data analysis viewed by a human on a screen. But data is about making decisions, and decisions happen in operational tools: notifications, CRM, ecommerce. Without a connection to operational tooling, tremendous value created by these pipelines is being lost.
  • Bridge not yet built to data consumers: Data consumers were actually better at self-serving prior to the arrival of the modern data stack. Excel skills are widely dispersed through the population of knowledge workers. There has not yet been an analogous interface where all knowledge workers can seamlessly interact with data in the modern data stack in a similar way.
  • Vertical analytical experiences: In consolidating onto a centralized data stack, we've lost differentiated analytical experiences for specific types of data. Purpose-built experiences for analyzing web and mobile data, sales data, and marketing data are badly needed.

I believe that we can start to see the seeds of most of these changes happening already. Let's take a look.

Governance

Governance is a product area whose time has come. This product category encompasses a broad range of use cases, including discovery of data assets, viewing lineage information, and just generally providing data consumers with the context needed to navigate the sprawling data footprints inside of data-forward organizations. This problem has only been made more painful by the modern data stack to date, since it has become increasingly easy to ingest, model, and analyze more data.

Without good governance, more data == more chaos == less trust.

While there have been commercial products in this space for some time (Collibra and Alation are most often cited), they tend to be focused on the enterprise buyer and as such haven't seen the broad adoption that is true of the rest of the modern data stack. As a result, most companies don't use a governance product today.

I've written a lot about this topic, as it's one that's very adjacent to the work we do on dbt. dbt actually has its own extremely simple governance interface---dbt Docs---and we anticipate doing a lot of work to extend its existing functionality in the coming years.
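To make "lineage information" concrete, here is a minimal sketch (the model names are hypothetical, in the style of a dbt DAG) of the core data structure a governance tool exposes: a graph of model-to-parent edges, walked recursively so a data consumer can see everything upstream of a given asset.

```python
# Minimal sketch of lineage as exposed by a governance tool:
# a graph mapping each model to its parent models, walked
# depth-first to list everything upstream of an asset.

edges = {
    "revenue_dashboard": ["fct_orders"],
    "fct_orders": ["stg_orders", "stg_payments"],
    "stg_orders": [],
    "stg_payments": [],
}

def upstream(model, graph):
    """Return every model upstream of `model`, depth-first."""
    seen = []
    for parent in graph.get(model, []):
        if parent not in seen:
            seen.append(parent)
            seen += [m for m in upstream(parent, graph) if m not in seen]
    return seen

lineage = upstream("revenue_dashboard", edges)
# → ["fct_orders", "stg_orders", "stg_payments"]
```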

Our interest in this area was very much sparked by the work done inside of Big Tech. Many big tech companies have built internal data governance products that are quite good:

(I'm sure I'm missing some here, so my apologies if you're affiliated with a project that I didn't name.)

More than a couple of the folks who have been involved in these projects have since left their Big Tech employers to commercialize their work. There is also broad VC enthusiasm for this trend. This combination is a recipe for innovation.

🗓 If you want to go deep on this topic, Drew will be leading a discussion on metadata at Coalesce next week.

Real-Time

If you're just building dashboards to answer analytical questions, you may not care about the real-timey-ness of your data. In fact, getting new data once a day may be entirely adequate for answering questions about revenue, cohort behavior, and active user trends. But there are many potential use cases for the data infrastructure we're building as a part of the modern data stack that go well beyond what is commonly thought of as "data analysis". Here are a few examples:

  • In-product analytics: You may want to build dashboards into your own product to surface useful reports to your users.
  • Operational intelligence: The people responsible for the core operations of your business need to know the state of the world right now. Inventory and logistics are very common needs in this sphere. 🗓 This was a significant factor for JetBlue; you'll likely hear Ashley cover this in her Coalesce talk...or ask her about it during speaker office hours in Slack.
  • Operational analytics: This is analytics too, but what if you could pipe that data back into your CRM or messaging platforms and trigger downstream events on the back of it? This is a big area of opportunity, and I'll talk more about it in the next section.

So, while it's certainly true that real-time data isn't required for the primary use cases of the modern data stack today, shrinking the whole-pipeline latency down to 15-60 seconds could unlock brand new use cases for this technology. Ultimately, the nervous system that powers your operational reports should be the same nervous system that powers these other use cases.
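To make the batch-vs-streaming contrast concrete, here is a minimal, in-memory sketch (table and data are hypothetical) of the high-water-mark polling that today's batch pipelines are built on. Streaming and CDC replace this scheduled poll with a push of each change as it happens.

```python
import datetime as dt

# Hypothetical source table, as a batch ingestion tool would see it.
source = [
    {"id": 1, "updated_at": dt.datetime(2020, 1, 1)},
    {"id": 2, "updated_at": dt.datetime(2020, 6, 1)},
    {"id": 3, "updated_at": dt.datetime(2020, 12, 1)},
]

def incremental_extract(table, high_water_mark):
    """Pull only rows changed since the last run; advance the mark."""
    new_rows = [r for r in table if r["updated_at"] > high_water_mark]
    new_mark = max((r["updated_at"] for r in new_rows), default=high_water_mark)
    return new_rows, new_mark

# One scheduled run: only ids 2 and 3 are newer than the stored mark.
batch, mark = incremental_extract(source, dt.datetime(2020, 3, 1))
```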

And we're starting to get signals that the technology here is within reach. Each of the major data warehouses has initial offerings for constructs that enable more real-time-y flows: Snowflake is leaning heavily on its streams functionality, and Bigquery and Redshift are both emphasizing their materialized views. Both approaches move us in the right direction, but from what I can tell neither gets us all the way there today. Progress on this front from all three vendors continues.

Another interesting entrant here is KSQL, a streaming SQL construct on top of Kafka. This is certainly attractive and promising, but it has some limitations around the SQL that can be executed (especially with regard to joins), so for me it also falls into the "not quite there" pile.

In the new product arena, I'm psyched about a product called Materialize. It's a Postgres-compatible data store that natively supports near-real-time materialized views, built from the ground up on top of stream processing constructs.

Finally, even if the database itself supports real-time processing, ingestion also needs to be real-time. That's why I'm excited about a product called Meroxa---plug-and-play CDC from relational data stores and webhooks. Products like this will be a critical unlock to get us to a streaming world; no one wants to stand up and manage Debezium.

We're not quite there, but you can start to see the writing on the wall. This stuff is going to start coming together over the next few years, and it's going to be awesome.

Completing the Feedback Loop

Today, data flows from operational systems into the modern data stack, where it is analyzed. From there, if that data is going to drive any action whatsoever, it has to be proactively looked up and acted on by a human. What if the modern data stack didn't only empower data analysis, but actually fed back into operational systems?

There are a huge number of potential use cases here. I'll just list some very basic ones:

  • Customer support staff spend all of their time inside of the help desk product used at their company. Pipe key user behavior data from your warehouse directly into the help desk product to make it available to agents as they help customers.
  • Similarly, sales professionals spend all of their time inside of your CRM. Feed product usage data directly into the CRM interface to enable them to have more context-rich interactions.
  • Rather than dealing with a myriad of event tracking implementations, feed your core product event stream directly into your messaging products to trigger automated messaging flows.
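The use cases above share one mechanical core, sketched below. The CRM client here is a stand-in dictionary, not a real API; an actual implementation would call the vendor's update endpoint, but the shape of the sync is the same.

```python
# Sketch of "completing the feedback loop": pushing a
# warehouse-computed user metric back into an operational tool
# so support agents and sales reps can see it in context.

def crm_update(crm, user_id, fields):
    # Stand-in for a real CRM API call (hypothetical).
    crm.setdefault(user_id, {}).update(fields)

def sync_to_crm(warehouse_rows, crm):
    """Mirror each user's computed activity metric into the CRM."""
    for row in warehouse_rows:
        crm_update(crm, row["user_id"], {"weekly_logins": row["weekly_logins"]})

warehouse_rows = [
    {"user_id": "u1", "weekly_logins": 4},
    {"user_id": "u2", "weekly_logins": 0},
]
crm = {}
sync_to_crm(warehouse_rows, crm)
# crm → {"u1": {"weekly_logins": 4}, "u2": {"weekly_logins": 0}}
```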

There are so many more---I truly believe that the obvious use cases are only going to be the tip of the iceberg. What this trend is actually going to unlock is the ability for data / business analysts to program the entire enterprise. And while real-time makes everything about this trend more powerful, even end-to-end latency of a few hours can still be adequate for many of these use cases.

I've been writing hacky scripts to facilitate this type of data movement since 2014, but we're finally starting to see products in this space get some traction. Census and Tray are the ones I'm most familiar with, but I'm sure there are others that I don't know about.

If you're writing dbt code today, assume that in the fairly near future this code will not only power internal analytics, it will power production business systems. This will make your work both more ambitious and more exciting.

Watch this space---this will happen quickly.

🗓 If you want to go deeper on this topic, join The Future of the Data Warehouse at Coalesce next week.

Democratized Data Exploration

Here's a potentially controversial opinion. I think that decision-makers---you know, the people who are actually responsible for making the operational decisions that your data informs---have not been well-served by the modern data stack. Executives? Sure, they get amazing dashboards. Analysts? Certainly. But there are a large number of people (like, hundreds of millions) whose primary professional tool is Excel, and I believe that their experience of data has actually gotten worse with the advent of the modern data stack. All of a sudden they're locked out from participating.

I know this sounds weird, but at one point, Excel was production. Over network drives, you could reference one workbook from another, and you could end up creating powerful data systems. Of course, it was all incredibly brittle, insecure, and error-prone, so trust me, I'm not proposing that we recreate it. But I really do believe that a very large number of data consumers were more empowered in that environment than they are in today's.

Sure, there are lots of options for non-SQL-using data consumers today. All of the major BI tools have some type of interface to facilitate exploration of data without needing SQL. But absolutely none of these (including LookML, sadly!) have come even remotely close to the level of widespread adoption or sheer creative flexibility of Excel. As a demonstration of the stickiness of that paradigm, you will often observe data consumers exporting data from their BI tool to Excel workbooks and then continuing to play with it there (much to the chagrin of their data team colleagues).

The challenge here is non-trivial. Without a powerful, flexible way for data consumers to self-serve, the promise of the modern data stack will forever be for a select few. This is a bad outcome.

Here's another controversial statement: what if the spreadsheet is actually the right answer? It's a well-understood, powerful, flexible user interface for exploring data. The problem is that the spreadsheet UI hasn't been integrated into the modern data stack yet. What would that look like?

I've seen two promising candidate ideas. First, bringing the data to the spreadsheet. Almost every BI product can do a bad version of this: "download as Excel". But this is not a good solution---it immediately cuts your worksheet off from the rest of the data infrastructure. As I mentioned before, interlinked spreadsheets and live updating were always a critical aspect of the prior Excel-based status quo.

The better version of this looks more like a "sync with Google Sheets" process, whereby the worksheet maintains its link to the data source and the data is updated on some intermittent basis. The user can then build on top of the source data in additional tabs. The best implementation of this approach that I've seen is a product called SeekWell. It's promising.

The second candidate idea is to use the spreadsheet to build formulas that get compiled down into SQL and run against the warehouse. Essentially, the spreadsheet interface becomes just another UI for querying data directly in the warehouse, but one that is more broadly understood by data consumers. This approach is best exemplified by a product called Sigma Computing, and you can see it in action here. Ultimately it doesn't achieve full spreadsheet-y-ness because you're constrained to the very limited one-formula-per-column paradigm, but I think it's an interesting take on the problem nonetheless.
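The compile-to-SQL idea can be sketched very simply. This is my own illustration of the one-formula-per-column model described above, not Sigma's actual compiler; the table and column names are hypothetical.

```python
# Each derived spreadsheet column is one formula over base columns;
# the "compiler" turns the whole sheet into a single SELECT.

def compile_to_sql(table, base_columns, derived):
    """derived maps new column name -> SQL expression over base columns."""
    exprs = list(base_columns) + [f"{expr} AS {name}" for name, expr in derived.items()]
    return f"SELECT {', '.join(exprs)} FROM {table}"

sql = compile_to_sql(
    "orders",
    ["order_id", "price", "quantity"],
    {"revenue": "price * quantity"},
)
# → "SELECT order_id, price, quantity, price * quantity AS revenue FROM orders"
```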

All of this said, I'm not positive that the right answer to data consumer exploration is spreadsheets---I think it's a promising avenue, but there are certainly downsides as well. What I do feel extremely confident about is that data consumer self-service exploration is going to be solved in the next several years. We're going to see a tremendous amount of experimentation and iteration around this idea, because the pain point is too obvious and the commercial opportunity is too great.

There is no technical hurdle to be overcome here---the building blocks are all in place. The hard part is figuring out the UX.

Vertical Analytical Experiences

There was a huge, glaring problem with the 2012-era web analytics world, when Google Analytics, Mixpanel, and KissMetrics were the only games in town: they were data silos. The only way you could access data from these tools was via their GUIs, and you couldn't connect it with data you had elsewhere. If you did want to incorporate data from other systems, you had to push it in as an event, thus leading to duplicate data. Anyone who has run an even-remotely-mature data organization knows what a mess this is.

This era led to a profusion of different verticalized data stores, each with its own copy of data locked inside a proprietary interface, and it is this excess of data silos that drove much of the demand for data warehousing. BUT! We've thrown the baby out with the bathwater.

There is a great amount of value in verticalized analytical experiences. An analytics tool that sees your data as a series of web events will be able to present smarter options to you than a tool that just sees rows and columns. Google Analytics is a more powerful tool---for analyzing web traffic data---than any of the BI tools in the modern data stack. That shouldn't be surprising.

So which is better: horizontal tooling that treats all data as rows and columns, or verticalized tooling that is purpose-built to analyze one specific type of data? The answer is: we need both. But the thing we're missing today is verticalized analytical interfaces mounted on the modern data stack. We need a product like Google Analytics that, instead of plugging into Google's proprietary back-end, plugs into your data warehouse. You tell it where to look for your events table and call out the key fields---user id, timestamp, event type, etc---and then it lets you explore by compiling all of your interactions with the interface down to SQL.
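As a sketch of what such a warehouse-native tool would do under the hood (the table and column names here are hypothetical), a point-and-click exploration like "daily active users" compiles down to SQL against your events table:

```python
# A verticalized, warehouse-native analytics tool knows the shape of
# event data, so once you name the key fields it can generate the
# SQL for common explorations itself.

def daily_active_users_sql(table, user_id_col, ts_col):
    return (
        f"SELECT date({ts_col}) AS day, "
        f"count(DISTINCT {user_id_col}) AS dau "
        f"FROM {table} GROUP BY 1 ORDER BY 1"
    )

sql = daily_active_users_sql("analytics.events", "user_id", "occurred_at")
# → "SELECT date(occurred_at) AS day, count(DISTINCT user_id) AS dau
#    FROM analytics.events GROUP BY 1 ORDER BY 1"
```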

This was not a realistic way to build an analytics product back in 2012, but today, with fast warehouses, standardized ingestion tools, and open source modeling with package management built in, you can realistically imagine telling your users "point me at your web analytics data," and they could actually do that. All of a sudden you're not working off of a silo or suffering through a suboptimal exploratory experience: you get the best of both worlds.

My belief is that as the set of companies using the modern data stack grows, the opportunities for new, easy, verticalized apps like this to be built will grow significantly. It's already a direction you can see Looker heading in with its app marketplace, although I think the opportunity is much larger than just the set of Looker users out there. My guess is that companies will be built around single products designed in this way, just like Google Analytics was in the prior era.

A useful narrative?

The above narrative is what I've arrived at after thinking about this topic for two full years now. I certainly don't believe that every specific prediction I've made will necessarily come true, but I do believe that the overall narrative is both directionally correct and useful.

As the industry goes through waves of innovation and deployment---waves that impact every single practitioner in the space---keeping this map in your head is a good way of orienting. Times of rapid change are pregnant with possibility for both individuals and companies, and I think we're seeing another one starting.

Last modified on: May 22, 2024
