Martian's tool automatically switches between LLMs to reduce costs

Shriyash Upadhyay and Etan Ginsberg, AI researchers from the University of Pennsylvania, believe that many large AI companies are sacrificing basic research in the pursuit of developing competitive, powerful AI models. The duo blames market dynamics: when companies raise large amounts of funding, most of it typically goes toward efforts to outpace rivals rather than toward studying fundamentals.

“In our research on LLMs at UPenn, we observed these trends in the AI industry,” Upadhyay and Ginsberg told TechCrunch in an email interview. “The challenge is making AI research worthwhile.”

Upadhyay and Ginsberg thought the best way to solve this would be to build their own company: one whose products benefited from interpretability. The company’s mission would naturally align with pursuing interpretability research rather than capabilities research, they hypothesized, leading to stronger research.

That company, Martian, has now emerged from stealth with $9 million in funding from investors including NEA, Prosus Ventures, Carya Venture Partners and General Catalyst. The money is being put toward product development, research into the internal operations of models, and growing Martian’s ten-employee team, Upadhyay and Ginsberg said.

Martian’s first product is a “model router,” a tool that takes a prompt intended for a large language model (LLM) – say, GPT-4 – and automatically routes it to the “best” LLM. By default, the router selects the LLM with the best uptime, skill set (e.g. mathematical problem solving) and cost-to-performance ratio for the prompt in question.

“The way companies currently use LLMs is to pick one LLM for each endpoint and send all of their requests to it,” Upadhyay and Ginsberg said. “But within a task like creating a website, different models will be better suited to a specific request depending on the context the user specifies (which language, which features, how much they’re willing to pay, etc.) … By using a team of models in an application, a company can achieve higher performance and lower costs than any single LLM could achieve alone.”
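The per-request routing idea can be made concrete with a small sketch. Everything here – the model names, cost figures, capability scores, and the difficulty heuristic – is an illustrative assumption, not Martian's actual method, which reportedly relies on understanding model internals rather than keyword rules:

```python
# Hypothetical model catalog: name -> (cost per 1K tokens in USD,
# rough capability score from 0 to 1). Figures are made up.
MODELS = {
    "small-model": (0.0005, 0.3),
    "mid-model": (0.003, 0.6),
    "large-model": (0.03, 0.95),
}

def estimate_difficulty(prompt: str) -> float:
    """Toy difficulty estimate: longer, math-heavy prompts score higher."""
    score = min(len(prompt) / 2000, 0.5)
    if any(kw in prompt.lower() for kw in ("prove", "derive", "equation")):
        score += 0.4
    return min(score, 1.0)

def route(prompt: str, budget_per_1k: float) -> str:
    """Pick the cheapest model whose capability covers the estimated
    difficulty, subject to the caller's per-1K-token budget."""
    difficulty = estimate_difficulty(prompt)
    candidates = [
        name for name, (cost, capability) in MODELS.items()
        if capability >= difficulty and cost <= budget_per_1k
    ]
    if not candidates:
        # No model is both capable enough and affordable:
        # fall back to anything within budget.
        candidates = [n for n, (c, _) in MODELS.items() if c <= budget_per_1k]
    # Cheapest qualifying model wins.
    return min(candidates, key=lambda n: MODELS[n][0])
```

A simple prompt with a generous budget would land on the cheap model, while a long proof-style prompt would be escalated to the most capable one – the "team of models" behavior the founders describe, in miniature.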

There is truth to that. Relying exclusively on a high-end LLM like GPT-4 can be cost-prohibitive for some, if not most, companies. The CEO of one market intelligence company recently revealed that it costs his company more than $1 million per year to process nearly 2 million articles per day using high-end OpenAI models.

Not every task needs a pricier model’s horsepower, but it can be difficult to build a system that switches between models intelligently on the fly. That’s where Martian – and its ability to estimate how a model will perform without actually running it – comes in.

“Martian routes requests to the cheaper models when they perform as well as the most expensive models, and only routes to the expensive models when necessary,” they added. “The router indexes new models as they come out, incorporating them into applications with zero friction or manual work required.”
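The cost-saving behavior described above can be approximated with a simple cascade: try the cheap model first and escalate only when its answer fails a quality check. Note that this baseline actually runs the cheap model, whereas Martian claims to estimate performance without running one; the function names and the quality-check signature below are assumptions for illustration:

```python
from typing import Callable

def cascade(prompt: str,
            cheap: Callable[[str], str],
            expensive: Callable[[str], str],
            good_enough: Callable[[str, str], bool]) -> tuple[str, str]:
    """Return (model_used, answer), calling the expensive model only
    when the cheap model's answer fails the quality check."""
    answer = cheap(prompt)
    if good_enough(prompt, answer):
        return ("cheap", answer)
    # Quality check failed: escalate to the expensive model.
    return ("expensive", expensive(prompt))
```

In production the `good_enough` predicate is the hard part – a naive check (e.g. answer length) will misroute, which is presumably why Martian pitches performance estimation as its differentiator.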

Now, model routing is not new technology. At least one other startup, Credal, provides an automatic model-switching tool. So Martian’s growth will depend on the competitiveness of its pricing – and its ability to deliver in high-stakes commercial scenarios.

Upadhyay and Ginsberg claim there has been some uptake, though, including among “multi-billion-dollar” companies.

“Building a truly effective router model is more difficult because it requires developing an understanding of how these models fundamentally work,” they said. “That’s the development we’re pioneering.”
