Insights
Google’s AI Builders Forum took place a couple of weeks ago. It brought together industry leaders from startup land, companies building foundational models, larger organizations, and the developers of some of Google’s most popular internal tools. There was a ton of interesting discussion around AI, how organizations are leveraging it, and what the implications might be for organizations large and small.
Here are eight takeaways we thought were most interesting, and why they might matter for you.
Lovable hit $100M ARR in just eight months with a team of only 70 people (35 of them engineers). That’s a massive amount of growth in a limited time frame, with a tiny fraction of the people it used to take to achieve similar targets.
How did they achieve this? They leaned heavily on Google’s AI and cloud-native infrastructure:
They also called out Google’s reliability. As they put it, “AI itself is a bit random,” so if the underlying infrastructure is also unstable, it can create major problems for users. Google Cloud, they suggested, gave them confidence in terms of uptime and consistency.
Our take:
The same benefits are available to large organizations as well. You can now test quickly, scale quickly, and get the same reliability that companies like Lovable have. But it requires rethinking the relationship between team size, infrastructure, and the rest of your operating model.
At the Forum, Fireworks noted they shipped 15 state-of-the-art open-weight models in a single month. That’s not a typo.
They suggested that this is not exclusive to them. What used to be quarterly or even annual release cycles at the infrastructure level are now happening weekly. They called out that startups are adapting by organizing work into 6-12 week sprints, running them in production to gather user feedback and swapping in new models or features as they become available.
Our take:
This is both a challenge and an opportunity. Many organizations are optimized for 12–18 month release cycles. And there were good reasons for that at one point. But it’s becoming less viable as a business strategy. The pace of things requires that you become more agile.
Enterprises that can adapt (adopting more modular architectures, leveraging multi-model strategies, etc.) will start to enjoy the same velocity advantages that startups do today.
But this requires building an agile data and AI foundation that can plug in new models at will. It requires a governance and vendor strategy behind that, so you avoid vendor lock-in. And it requires getting comfortable with a cadence of continuous production experimentation.
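As a concrete illustration of what “plug in new models at will” can look like, here is a minimal, hypothetical sketch of a thin adapter layer. None of the class, vendor, or model names below come from the Forum; they are placeholders for whatever providers you actually use.

```python
from typing import Protocol


class ChatModel(Protocol):
    """Anything that can turn a prompt into a completion."""

    def generate(self, prompt: str) -> str: ...


class VendorAModel:
    """Placeholder adapter wrapping one provider's SDK."""

    def __init__(self, model_id: str):
        self.model_id = model_id

    def generate(self, prompt: str) -> str:
        # The real provider SDK call would go here; stubbed out for the sketch.
        return f"[{self.model_id}] response to: {prompt}"


class VendorBModel:
    """Placeholder adapter for a second provider."""

    def __init__(self, model_id: str):
        self.model_id = model_id

    def generate(self, prompt: str) -> str:
        return f"[{self.model_id}] response to: {prompt}"


# A registry keyed by use case, not by vendor. Swapping a model becomes a
# one-line configuration change rather than a refactor of every caller.
MODELS: dict[str, ChatModel] = {
    "support_triage": VendorAModel("vendor-a-small"),
    "contract_review": VendorBModel("vendor-b-large"),
}


def run(use_case: str, prompt: str) -> str:
    return MODELS[use_case].generate(prompt)


if __name__ == "__main__":
    print(run("support_triage", "Customer says the invoice total looks wrong."))
```

The specific pattern matters less than the principle: callers depend on the use case, not the vendor, which is what keeps vendor lock-in at bay when new models ship weekly.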
One of the most provocative predictions at the Forum was that many long-tail SaaS products will be displaced over the next few years. The belief is that enterprises can now stand up custom, AI-powered flows internally using low-code or no-code platforms.
This represents a pretty seismic shift. Historically, companies paid for SaaS because it was faster than building in-house. But it’s increasingly true that even non-technical staff can create functional web apps or workflows quickly.
Many SaaS platforms become bloated over time as they layer on functionality to address edge cases for their entire range of clients. You don’t need all that. In most cases you need 5-10% of what these platforms give you. Now you’ll be able to design workflows that fit your business.
Our take:
It seems likely that SaaS sprawl will shrink. As companies (and their CIOs) become more comfortable giving their teams the reins, you’ll see big parts of your current tech stack replaced by lighter-weight, custom-built workflows.
A transition like this will create new challenges, though. Security, compliance, data silos, and more all come into play, and corralling all these little micro-apps across the business becomes a new and different type of headache.
Creating a process for how to make build and buy decisions in this new world will become important. Creating integration layers (MCP, orchestration, etc.) will become important. Aligning on processes for enabling these modular, composable and governed processes will become important.
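As one small example of what an integration layer can look like, here is a minimal tool server using the official Model Context Protocol Python SDK (the `mcp` package). The `lookup_invoice` tool is a made-up placeholder; in practice it would wrap one of your internal systems so any MCP-aware agent or app can call it through one governed entry point.

```python
# pip install "mcp[cli]"  -- the official Model Context Protocol Python SDK
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-tools")


@mcp.tool()
def lookup_invoice(invoice_id: str) -> dict:
    """Return basic details for an invoice (stubbed for this sketch)."""
    # A real deployment would query your billing system here, with
    # authentication and audit logging handled in this one shared layer.
    return {"invoice_id": invoice_id, "status": "paid", "amount_usd": 1250.00}


if __name__ == "__main__":
    mcp.run()
```

The value of a layer like this is that the micro-apps your teams build all consume the same governed tools, rather than each one wiring itself directly into core systems.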
At the Forum, Rapid’s team was blunt: the only way to tune agents is to put them in front of real users and see what happens.
Offline evaluation will still matter. Benchmarks, test cases, sandbox environments and the like will tell you if your code runs. But they won’t tell you whether a workflow is natural, or useful, or whether the person trusts the output. That can only happen when the system is live.
This matters, because agents are probabilistic. You can’t guarantee outputs will be repeatable. Often something that looks good in testing utterly fails in production. If it doesn’t work, people won’t use it, and your work is wasted.
Our take:
The design, test, launch model starts to break in an AI-saturated world. Static pilots don’t cut it. Yes, you need safety and responsibility guardrails - things like human-in-the-loop review, stage gates for certain approvals, and monitoring for drift. But you need to get this stuff into the hands of your team faster. And you have to assume it’s not launch-and-maintain: you will have much more iteration and fine-tuning than you used to.
A clear theme from the Forum reinforced what you’ve likely already heard elsewhere. Foundational models will likely continue to become commodities: tools everyone has access to, trained on much the same data. The real differentiator won’t be the model, it will be the data you control.
Clean, governed, and continuously updated data pipelines allow for faster retraining and better personalization over time. The faster you can capture and act on real-world usage, the better your tools become. And the more data you can provide that only you have access to, the more useful your tools will be.
Our take:
We’ve all been getting used to the new world of AI over the last few years. But we’re now at a place where it’s clear the question isn’t which model to bet on. It’s more like, “how do we mobilize our data so any model works better for us than it does for our competitors?”
You are almost certainly sitting on a vast amount of operations, customer, or transaction data. Most of it is untapped. If you can activate it, you can do some pretty special things.
The trick is getting it in a place where it’s usable. You need a solid data foundations strategy.
One of the more sobering notes at the Forum came from Fireworks: they warned of a 2026 capacity cliff - a world where chips, GPUs, data center space, and power supply will limit how much compute the industry can bring online.
Historically, the cloud has been effectively infinitely elastic: it could absorb whatever workload you threw at it. But with compute scarcity come rising costs, allocation tradeoffs, and the like.
Our take:
This will matter for a bunch of reasons. Training and inference are incredibly resource intensive. Ditto reasoning models and multi-agent systems. An increasingly important skill will be squeezing as much value as possible out of scarce compute.
Cost discipline will become a strategic tool. It’s likely compute will become one of the biggest line items in your tech budget. You’ll have to learn how to prioritize which AI workflows are mission critical, and which can run at lower fidelity or intermittently. You’ll also have to get smart about which models to use when.
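One lightweight way to operationalize that discipline is an explicit tiering policy. The sketch below is hypothetical, with made-up tier names and model IDs, but it shows the basic idea: map each workload to the cheapest model and running mode that meets its bar.

```python
from dataclasses import dataclass


@dataclass
class Tier:
    model_id: str    # placeholder IDs; substitute whatever you actually run
    max_tokens: int
    batch_ok: bool   # can this work run off-peak or intermittently?


# Mission-critical work gets the strongest model, synchronously.
# Everything else degrades gracefully to cheaper or batched options.
POLICY = {
    "critical": Tier(model_id="frontier-large", max_tokens=4096, batch_ok=False),
    "standard": Tier(model_id="mid-size", max_tokens=1024, batch_ok=False),
    "background": Tier(model_id="small-distilled", max_tokens=512, batch_ok=True),
}


def tier_for(workload: str) -> Tier:
    # Default to the cheapest tier so new workloads have to earn their compute.
    return POLICY.get(workload, POLICY["background"])
```

Even a simple policy like this forces the conversation about which workflows truly need frontier-scale compute, and which can wait for off-peak capacity.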
One of the more interesting tensions at the Forum was Google’s dual positioning. On the one hand, they demonstrated the strength of their native stack. On the other, they recognize the reality of a multi-model world.
They talked about Vertex AI’s Model Garden, which provides access to 200+ models, including those from Anthropic, DeepSeek, and other open-weight options.
They are trying to be both the strongest proprietary platform, AND the most open.
Our take:
We think this is smart on their part. We routinely experiment with different models, and have found most excel in certain areas and struggle in others.
By giving you the choice, you can build fast using Google’s infrastructure but hedge your bets and lean on other models when relevant.
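For a sense of what that looks like in practice, here is a minimal sketch of calling two different Model Garden publishers from the same Google Cloud project, using the `google-genai` and `anthropic[vertex]` Python packages. The project ID, regions, and model IDs below are placeholders; check Model Garden for what is enabled in your environment, and note that both clients assume Application Default Credentials are already configured.

```python
# pip install google-genai "anthropic[vertex]"
from google import genai
from anthropic import AnthropicVertex

PROJECT = "your-gcp-project"   # placeholder
PROMPT = "Summarize our Q3 churn drivers in three bullets."

# Gemini via the Gen AI SDK pointed at Vertex AI.
gemini = genai.Client(vertexai=True, project=PROJECT, location="us-central1")
gemini_reply = gemini.models.generate_content(
    model="gemini-2.5-flash",   # example model ID
    contents=PROMPT,
)

# An Anthropic model served through Vertex AI.
claude = AnthropicVertex(project_id=PROJECT, region="us-east5")
claude_reply = claude.messages.create(
    model="claude-sonnet-4@20250514",   # example model ID
    max_tokens=512,
    messages=[{"role": "user", "content": PROMPT}],
)

print(gemini_reply.text)
print(claude_reply.content[0].text)
```

The point is that the infrastructure, billing, and governance stay in one place while the model behind a given workload remains swappable.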
The live demo of Gemini CLI was one of the most compelling moments. It’s an open-source terminal assistant, and we found this to be a great example of what an AI-ready development environment might look like. A few highlights:
Our take:
AI coding assistants already help engineers move faster. But Gemini CLI pointed to something bigger. It showed an example of a fully orchestrated workflow. They talked about how this could theoretically turn someone from a 10x engineer to a 100x engineer.
It’s not hard to imagine this becoming more common. And it’s also not hard to imagine similar types of orchestration layers for non-engineers.
Again, as these become more powerful it’s going to be critical for ops and governance to keep up. You can sit on the sidelines and not do anything, but we think that’s short-sighted. You want to be able to give your team access to this kind of power, while having the right guardrails in place to make sure you don’t compromise security or compliance.
The through-line across Google’s AI Builders Forum wasn’t just “AI is moving fast.” It was that operating models are being rewritten.
We’re entering a world where you no longer need armies of engineers to scale. Where you can compress product cycles into weeks. Where data and rapid iteration become material advantages. And where infrastructure choices become a major strategic decision.
Manifold can help your organization adapt: by building the data foundations, designing modular workflows, and guiding your leaders on how to think about these decisions. The end state: you’ll learn faster, move smarter, and create a more durable advantage.
Partner with Us
Making better decisions leads to measurably better outcomes. With a solid data and AI foundation, businesses can innovate, scale, and realize limitless opportunities for growth and efficiency.
We’ve built our Data & AI capabilities to help empower your organization with robust strategies, cutting-edge platforms, and self-service tools that put the power of data directly in your hands.
Self-Service Data Foundation
Empower your teams with scalable, real-time analytics and self-service data management.
Data to AI
Deliver actionable AI insights with a streamlined lifecycle from data to deployment.
AI Powered Engagement
Automate interactions and optimize processes with real-time analytics and AI enabled experiences.
Advanced Analytics & AI
Provide predictive insights and enhanced experiences with AI, NLP, and generative models.
MLOps & DataOps
Streamline model deployment and data pipelines with automated, monitored, and reliable operations.
Healthcare
Data-Driven Development of a Patient Engagement Application
We partnered with a healthcare provider to build a scalable patient engagement app with real-time insights and secure document management. Leveraging advanced data analytics, the platform ensured continuous improvement in patient care and operations.
Professional Services
Navigating Trust in Emerging Technologies
A multinational firm analyzed public sentiment on emerging technologies using AI and NLP. The insights revealed privacy concerns and opportunities, helping the client prioritize investments in ethical practices and transparency.
Ready to embrace transformation?
Let’s explore how our expertise and partnerships can accelerate impact for your organization.