Not literally my friends. We do care about icebergs and penguins.
Earlier this year we went through Y Combinator (“YC”). We learned a ton, thanks to a combination of startup-specific knowledge and YC Group Partners who have all built great companies.
So when they give feedback, you’d better listen. Well… not us.
What are you babbling about? Well, here we go:
After almost two years of heads-down development, we had built the first complete data infrastructure platform. We positioned iomete as an all-in-one solution that replaces Snowflake (lakehouse), Databricks (jobs and ETL), Fivetran (onboarding third-party data), Informatica (data governance), and Looker/Tableau (BI) with a single platform.
When we asked one of the YC partners why they had accepted our application (there is a ~2% acceptance rate), the answer was: “two things: (i) a very bold vision, i.e. if it works it will be big, and (ii) a strong founding team”.
So it was a little disconcerting to get the feedback, halfway into the program, that “it seemed that we were boiling the ocean” (YC is great at giving direct feedback).
Google’s result for “boiling the ocean”: “undertaking an impossible task or making a task unnecessarily difficult. The phrase is used in a variety of settings as a negative comment on how one conducts business. The phrase derives from the literal concept of boiling the ocean, which is an impossible task.”
Y Combinator typically recommends an iterative product development approach: build a Minimum Viable Product as soon as possible, collect user feedback, and iterate 10x, 100x, 1000x from there.
We had spent almost two years on development. We had good reasons for this. It is a function of the nature of the platform: one cannot offer the benefits of a complete data infrastructure until the data infrastructure is, well… complete.
Before starting iomete we were in-house engineers. We observed a recurring problem: building data infrastructure is complex and requires significant engineering effort. While many vendor solutions address one part of the data infrastructure (e.g. lakehouse, data governance, ETL, BI), engineering teams must integrate these multi-vendor solutions to create a complete and effective data infrastructure. This is time-consuming and costly.
We saw at Uber and Google what a complete and effective data infrastructure looks like and the idea was born to build a complete platform and offer this as a fully-managed service to organizations with smaller engineering teams and data budgets.
While we believe in our long-term vision of delivering one all-in-one platform, we learned a few things in the last two months:
- Our platform is ideal for large organizations (e.g. Fortune 500), but at this point in our journey we lack the credibility to sell such a bold vision to them (they are risk-averse and would rather overpay for an established incumbent).
- We found that Google search traffic is plentiful for the standalone solutions that make up a data infrastructure (data warehouse, lakehouse, ETL, data ingestion, data catalog, business intelligence, data science, and AI/ML tools), but almost non-existent for an all-in-one solution.
We chewed on this for a while and recently decided to narrow our go-to-market scope. We did not make any meaningful changes to our platform or product roadmap; we are simply changing our positioning so that potential customers find what they are looking for from a product perspective. We’ll focus on the powerful combination of lakehouse and built-in data catalog.
We find that once customers are onboarded, they’re pleased to discover that our platform is comprehensive and that there is no need to patch together multi-vendor solutions.
We’ll keep you posted on our re-positioning or “light pivot”, if you will.
Lots of good stuff coming over the next few weeks!
And the ocean? We still want to boil it, but it might take a few years longer. The icebergs and penguins will be happy to hear that.