Weav exits stealth with plug-and-play AI copilots for enterprises

California-based Weav, a startup working to transform how companies build and use generative AI in their workflows, today came out of stealth with the launch of Enterprise AI Copilots, a suite of low-code, plug-and-play tools that integrate generative AI capabilities into existing systems and business processes.

The launch follows Weav’s seed funding round from Sierra Ventures. The platform aims to spare enterprise teams the hassle of building and integrating AI into their systems themselves, from training a model to deploying and monitoring it.

“Business users should be able to initiate a use case and bring in the right data to activate AI at the right places and see results,” Weav CEO and co-founder Peeyush Rai told VentureBeat. “The key (here) is to build the right level of abstraction when designing the platform, which is what we have tried to do with our copilot approach.”

A plug-and-play offering that cuts down the time and effort needed to integrate AI could be a game changer for teams looking to take advantage of the technology in their workflows, especially at small and medium-sized businesses (SMBs), which are often resource- and staff-constrained.

How do Weav’s Enterprise AI Copilots work?

With its copilots, Weav provides enterprises with three key things: ready-to-use generative AI capabilities, connectors that pull data from commonly used enterprise tools, and an API that lets developers incorporate those capabilities into various enterprise workflows and applications.

Everything needed to keep the capabilities running, that is, the infrastructure stack, comes pre-integrated with the copilots: integrations, prompt management, foundation models such as GPT-4 and Llama 2, vector databases, and security and monitoring.
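
Weav has not published its SDK, so the snippet below is only a generic sketch of the integration pattern described above: ready-made capabilities, connectors that feed them data, and a small API surface that application code calls. Every class, method and connector name here is a hypothetical stand-in, not Weav’s actual interface.

```python
# Illustrative only: a generic shape for a "plug-and-play copilot" integration.
# None of these names come from Weav's (unpublished) SDK.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Connector:
    """Pulls records from an existing enterprise tool (e.g. a CRM or a wiki)."""
    name: str
    fetch: Callable[[], List[str]]


@dataclass
class Copilot:
    """Bundles a ready-made AI capability with the connectors that feed it."""
    capability: str  # e.g. "document", "conversation" or "search"
    connectors: Dict[str, Connector] = field(default_factory=dict)

    def add_connector(self, connector: Connector) -> None:
        self.connectors[connector.name] = connector

    def ask(self, question: str) -> str:
        # A real copilot would retrieve, rank and pass context to an LLM;
        # this stub only shows the call an application would make.
        docs = [d for c in self.connectors.values() for d in c.fetch()]
        return f"{len(docs)} documents considered for: {question!r}"


copilot = Copilot(capability="document")
copilot.add_connector(Connector("sharepoint", fetch=lambda: ["policy.pdf", "contract.docx"]))
print(copilot.ask("Which contracts expire this quarter?"))
```

The point of such an abstraction is that application code only ever calls something like ask, while connectors, prompts and models can be swapped behind it.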

Currently, the company offers copilots for three key AI-driven functions: Document, Conversation and Search.

  1. The Document copilot ingests unstructured data such as documents, images, spreadsheets and JSON files, prepares that information and extracts key entities and values. This lets users search their documents in natural language, summarize them or define criteria to assess compliance.
  2. The Conversation copilot goes a step further by letting users “converse” with their data in natural language. It understands a user’s intent and performs the appropriate actions to get the job done.
  3. Finally, the Search copilot enables contextual search across both unstructured and structured data sources using natural language, translating each search into the appropriate native queries for the data sources or repositories where the information lives (a generic sketch of this routing step follows the list).
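
The routing step in the Search copilot, turning one natural-language question into whatever native query each backend expects, is a familiar pattern, and a minimal generic version of it looks roughly like the sketch below. The backend names and the trivial translation functions are illustrative assumptions, not Weav’s implementation; a production system would use an LLM or semantic parser to generate the native queries.

```python
# Illustrative sketch of natural-language query routing: the same question is
# translated into the native query form of whichever backend holds the data.
# Backend names and translation rules are assumptions, not Weav's code.
from typing import Dict, Union


def to_sql(question: str) -> str:
    # Stand-in for an LLM- or parser-based text-to-SQL step.
    return f"SELECT * FROM documents WHERE body ILIKE '%{question}%' LIMIT 10"


def to_vector_query(question: str) -> Dict:
    # Stand-in for an embedding-based search over unstructured content.
    return {"embed": question, "top_k": 5, "index": "documents"}


def route_query(question: str, source: str) -> Union[str, Dict]:
    """Dispatch one natural-language question to the right native query."""
    if source == "warehouse":        # structured, SQL-addressable data
        return to_sql(question)
    if source == "knowledge_base":   # unstructured, embedding-indexed data
        return to_vector_query(question)
    raise ValueError(f"unknown source: {source}")


print(route_query("late shipments to Berlin", "warehouse"))
print(route_query("late shipments to Berlin", "knowledge_base"))
```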

“When data is processed or a user initiates an action, the copilots orchestrate multiple processes in the back end, including applying guardrails to protect users and data, querying the embeddings in the vector databases, searching knowledge bases, or running a query on the database, and then composing the results to pass to the large language model (LLM) to generate a natural language response,” Rai noted. “We are model agnostic. We have our own smaller models for specific tasks, and we can use any third-party LLM.”
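
What Rai describes is, in outline, a retrieval-augmented generation (RAG) loop: apply guardrails, retrieve context from a vector index or knowledge base, compose a prompt and hand it to an LLM. The sketch below shows only that generic flow; every function in it is a toy placeholder rather than Weav’s code.

```python
# Generic retrieval-augmented generation flow of the kind described above.
# All callables are placeholders; this is not Weav's implementation.
from typing import Callable, List


def answer(question: str,
           guardrail: Callable[[str], bool],
           retrieve: Callable[[str], List[str]],
           llm: Callable[[str], str]) -> str:
    # 1. Guardrails: refuse requests that violate data or safety policy.
    if not guardrail(question):
        return "Request blocked by policy."
    # 2. Retrieval: query the vector index / knowledge base for context.
    context = retrieve(question)
    # 3. Composition: pack retrieved passages and the question into a prompt.
    prompt = "Answer using only this context:\n" + "\n".join(context) + f"\n\nQ: {question}"
    # 4. Generation: a pluggable LLM produces the natural-language response.
    return llm(prompt)


# Toy stand-ins so the sketch runs end to end.
print(answer(
    "Summarize the refund policy",
    guardrail=lambda q: "password" not in q.lower(),
    retrieve=lambda q: ["Refunds are issued within 30 days of purchase."],
    llm=lambda p: "Refunds are available for 30 days after purchase.",
))
```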

In most applications, he said, the copilots work together to deliver a seamless experience as users extract value from their structured and unstructured data.

On the model side, the company currently supports OpenAI’s GPT-4 and GPT-3.5 as well as Meta’s Llama 2 out of the box, with on-demand integrations for Anthropic’s Claude and Cohere’s models.
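
In practice, being “model agnostic” usually means a thin adapter layer so that GPT-4, Llama 2 or a smaller in-house model can be swapped behind one interface. The sketch below shows that pattern; the provider classes and method names are invented for illustration and are not Weav’s or any vendor’s actual API.

```python
# Minimal model-adapter pattern: swap LLM providers behind one interface.
# Provider classes and method names are illustrative assumptions only.
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class OpenAIProvider(LLMProvider):
    def __init__(self, model: str = "gpt-4"):
        self.model = model

    def complete(self, prompt: str) -> str:
        # Real code would call the OpenAI API here.
        return f"[{self.model}] {prompt[:40]}..."


class LlamaProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # Real code would call a self-hosted Llama 2 endpoint here.
        return f"[llama-2] {prompt[:40]}..."


def run(provider: LLMProvider, prompt: str) -> str:
    """Application code depends only on the interface, not on a vendor."""
    return provider.complete(prompt)


print(run(OpenAIProvider(), "Summarize Q3 revenue drivers"))
print(run(LlamaProvider(), "Summarize Q3 revenue drivers"))
```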

Promising early results with adoption by big players

With the power of large language models now familiar to almost every enterprise, it’s not hard to imagine how organizations could put Weav’s copilots to use.

The company said its plug-and-play technology is being piloted by some of the largest companies in the world, including a multinational management consulting firm operating in over 40 countries, an F100 pharmaceutical conglomerate with globally distributed teams and one of the fastest-growing e-commerce platforms.

While these companies are still in the initial stages of implementation, Rai noted that early results show the copilots achieving precision of 87% to 95%, along with productivity gains or cost reductions of up to 75%.

Plan to stand out

After the seed round in November 2022, Weav’s focus was on getting the platform ready for enterprise scale. Now, with the official launch of the copilots, the company is building out its go-to-market and sales engines to win more customers.

Beyond this, Weav plans to invest in expanding the set of models supported on the platform. It will develop some core algorithms of its own as well as its own multimodal foundation model, enabling enterprises to do more with their unstructured data.

As the company moves ahead with its product, it recognizes that this will be a competitive space. Dataiku and Databricks are already helping enterprises with generative AI deployment, and Rai expects more companies to jump on the bandwagon soon.

“We see four developing trends in the ‘competitive’ landscape, broadly. First, we anticipate that big tech companies like Microsoft, Google and Amazon will sell generative AI tooling and infrastructure into their existing accounts. Then, there are incumbent software companies that were using previous-generation technologies to build chatbots or narrow NLP models, as well as new startups. Finally, we also anticipate internal IT organizations that may want to attempt to build it themselves,” the CEO said.

In this race, he said, the winners will be those that provide real business value to enterprises with the fastest time-to-value and the lowest total cost of ownership (TCO), which is exactly what Weav is targeting.

“Our promise to customers is to show initial value in 2-4 weeks and production deployments in 4-6 weeks. The speed to value is very important. This combination of factors would differentiate us,” he added.

According to estimates from McKinsey, the implementation of generative AI could add $400 billion to $660 billion in annual operating profits for retail and consumer packaged goods companies alone. Across sectors, the technology has the potential to generate $2.6 trillion to $4.4 trillion in global corporate profits.
