Giskard’s open-source framework evaluates AI models before they’re pushed into production

Key Takeaways:

– Giskard is a French startup that has developed an open-source testing framework for large language models.
– The framework helps developers identify risks of biases, security vulnerabilities, and harmful or toxic content generated by language models.
– With upcoming AI regulations, companies will need to comply with rules and mitigate risks or face fines.
– Giskard is one of the first developer tools focused on testing language models more efficiently.
– The company has released an open-source Python library that integrates with popular ML tools and platforms.
– Giskard helps generate a test suite covering various issues such as performance, misinformation, biases, and harmful content.
– Tests can be integrated into the CI/CD pipeline for regular execution and developers receive scan reports if any issues are found.
– Giskard customizes tests based on the specific use case of the model, using access to relevant databases and knowledge repositories.
– The company also offers an AI quality hub for debugging and comparing language models, with plans to include regulatory features in the future.
– Giskard’s third product is a real-time monitoring tool called LLMon, which evaluates language model answers for common issues before delivering responses to users.
– Giskard currently works with companies using OpenAI’s APIs and language models, but is working on integrations with other platforms.
– The AI Act’s applicability to foundational models is still unclear, but Giskard is well-positioned to identify potential misuses of language models.
– Giskard plans to expand its team to become a leading provider of language model testing and regulation compliance solutions.

TechCrunch:

Giskard is a French startup working on an open-source testing framework for large language models. It can alert developers to risks of bias, security holes and a model’s ability to generate harmful or toxic content.

While there’s a lot of hype around AI models, ML testing systems will also quickly become a hot topic as regulation is about to be enforced in the EU with the AI Act, and in other countries. Companies that develop AI models will have to prove that they comply with a set of rules and mitigate risks so that they don’t have to pay hefty fines.

Giskard is an AI startup that embraces regulation, and one of the first examples of a developer tool that focuses specifically on testing models more efficiently.

“I worked at Dataiku before, particularly on NLP model integration. And I could see that, when I was in charge of testing, there were both things that didn’t work well when you wanted to apply them to practical cases, and it was very difficult to compare the performance of suppliers between each other,” Giskard co-founder and CEO Alex Combessie told me.

There are three components behind Giskard’s testing framework. First, the company has released an open-source Python library that can be integrated into an LLM project, and more specifically into retrieval-augmented generation (RAG) projects. It is already quite popular on GitHub and is compatible with other tools in the ML ecosystem, such as Hugging Face, MLflow, Weights & Biases, PyTorch, TensorFlow and LangChain.
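In practice, integrating the library means wrapping your existing pipeline in a Giskard model object. The sketch below follows the usage pattern described in the open-source library’s documentation; the `rag_chain` callable stands in for your own RAG pipeline, and exact parameter names may differ between versions.

```python
# A minimal sketch of wrapping an existing RAG pipeline for Giskard, following
# the library's documented usage; exact parameters may differ between versions.
import pandas as pd
import giskard

def answer_batch(df: pd.DataFrame) -> list:
    # `rag_chain` stands in for your own retrieval-augmented pipeline
    # (e.g. a LangChain chain over the latest IPCC report); it is assumed to exist.
    return [rag_chain.run(question) for question in df["question"]]

wrapped_model = giskard.Model(
    model=answer_batch,
    model_type="text_generation",
    name="IPCC climate QA assistant",
    description="Answers questions about climate change using the latest IPCC report.",
    feature_names=["question"],
)
```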

After the initial setup, Giskard helps you generate a test suite that will be regularly used on your model. Those tests cover a wide range of issues, such as performance, hallucinations, misinformation, non-factual output, biases, data leakage, harmful content generation and prompt injections.
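For illustration, here is roughly what that looks like with the wrapped model from the previous sketch: the automated scan probes the model and its findings can be turned into a reusable suite. Method names follow the library’s public documentation and may vary by version.

```python
# Sketch: run the automated scan on the wrapped model and turn its findings
# into a reusable test suite (method names per the public docs; may vary).
scan_report = giskard.scan(wrapped_model)   # probes for hallucination, bias, injection, etc.
scan_report.to_html("scan_report.html")     # shareable report of the detected issues

test_suite = scan_report.generate_test_suite("LLM regression suite")
results = test_suite.run()                  # re-run the same checks on later model versions
print(results.passed)
```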

“And there are several aspects: you’ll have the performance aspect, which will be the first thing on a data scientist’s mind. But more and more, you have the ethical aspect, both from a brand image point of view and now from a regulatory point of view,” Combessie said.

Developers can then integrate the tests in the continuous integration and continuous delivery (CI/CD) pipeline so that tests are run every time there’s a new iteration on the code base. If there’s something wrong, developers receive a scan report in their GitHub repository, for instance.
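As a sketch of what that wiring could look like, a CI job might simply run a small script and fail on a non-zero exit code. The script and module path below are hypothetical, not part of Giskard’s tooling.

```python
# ci_llm_tests.py — hypothetical CI entry point (module path is illustrative).
# Re-runs the scan-generated suite and fails the pipeline if any test fails.
import sys

import giskard
from my_project.giskard_model import wrapped_model  # assumed: the wrapped model shown earlier

scan_report = giskard.scan(wrapped_model)
suite_result = scan_report.generate_test_suite("CI regression suite").run()

# A non-zero exit code makes the CI step (e.g. a GitHub Actions job) fail.
sys.exit(0 if suite_result.passed else 1)
```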

Tests are customized based on the end use case of the model. Companies working on RAG can give Giskard access to vector databases and knowledge repositories so that the test suite is as relevant as possible. For instance, if you’re building a chatbot that gives information on climate change based on the most recent IPCC report and an LLM from OpenAI, Giskard’s tests will check whether the model generates misinformation about climate change, contradicts itself, and so on.


Giskard’s second product is an AI quality hub that helps you debug a large language model and compare it to other models. This quality hub is part of Giskard’s premium offering. In the future, the startup hopes it will be able to generate documentation that proves that a model is complying with regulation.

“We’re starting to sell the AI Quality Hub to companies like the Banque de France and L’Oréal — to help them debug and find the causes of errors. In the future, this is where we’re going to put all the regulatory features,” Combessie said.

The company’s third product is called LLMon. It’s a real-time monitoring tool that can evaluate LLM answers for the most common issues (toxicity, hallucination, fact checking…) before the response is sent back to the user.
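LLMon’s own interface isn’t public in this article, but the general pattern it describes, running evaluators on an answer before it reaches the user, can be sketched as follows. This is purely illustrative and not LLMon’s actual API.

```python
# Purely illustrative gate (not LLMon's actual API): run evaluators on an
# LLM answer before it is returned to the user.
from typing import Callable, Iterable

def toxicity_check(answer: str) -> bool:
    # Placeholder heuristic; a real evaluator would call a toxicity classifier.
    return "hate" not in answer.lower()

def guarded_answer(question: str,
                   generate: Callable[[str], str],
                   checks: Iterable[Callable[[str], bool]] = (toxicity_check,)) -> str:
    answer = generate(question)
    if all(check(answer) for check in checks):
        return answer
    return "Sorry, I can't share that answer."  # fall back instead of serving a flagged response
```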

It currently works with companies that use OpenAI’s APIs and LLMs as their foundational model, but the company is working on integrations with Hugging Face, Anthropic, etc.

Regulating use cases

There are several ways to regulate AI models. Based on conversations with people in the AI ecosystem, it’s still unclear whether the AI Act will apply to foundational models from OpenAI, Anthropic, Mistral and others, or only to applied use cases.

In the latter case, Giskard seems particularly well positioned to alert developers on potential misuses of LLMs enriched with external data (or, as AI researchers call it, retrieval-augmented generation, RAG).

There are currently 20 people working for Giskard. “We see a very clear market fit with customers on LLMs, so we’re going to roughly double the size of the team to be the best LLM antivirus on the market,” Combessie said.


AI Eclipse TLDR:

Giskard is a French startup that has developed an open-source testing framework for large language models (LLMs). The framework is designed to help developers identify biases, security vulnerabilities, and the potential for generating harmful or toxic content in their LLMs. With the impending enforcement of regulations such as the AI Act in the EU, companies developing AI models will need to demonstrate compliance and mitigate risks to avoid significant fines. Giskard aims to address this need by providing a developer tool focused on efficient testing.

The testing framework consists of three components. First, Giskard offers an open-source Python library that can be integrated into LLM projects, specifically retrieval-augmented generation (RAG) projects. The library is compatible with popular ML tools and platforms such as Hugging Face, MLflow, Weights & Biases, PyTorch, TensorFlow, and LangChain. Once integrated, Giskard helps generate a test suite that covers various issues, including performance, misinformation, biases, data leakage, and harmful content generation.

Developers can incorporate these tests into their continuous integration and continuous delivery (CI/CD) pipelines to run them automatically with each code iteration. If any issues are detected, developers receive a scan report in their GitHub repository. The tests can be customized based on the specific use case of the LLM, and Giskard can access vector databases and knowledge repositories to ensure the relevance of the test suite.

In addition to the testing framework, Giskard offers two other products. The first is an AI quality hub, which allows developers to debug their LLMs and compare them to other models. The second product, called LLMon, is a real-time monitoring tool that evaluates LLM answers for common issues such as toxicity, hallucination, and fact-checking.

Giskard aims to embrace regulation and assist companies in complying with regulatory requirements. The startup is already selling its AI Quality Hub to organizations like Banque de France and L’Oréal to help them identify and address errors. Giskard plans to expand its offerings to include features specifically focused on regulatory compliance.

Currently, Giskard works with companies that use OpenAI’s APIs and LLMs as foundational models, but the company is also working on integrations with other platforms like Hugging Face and Anthropic.

Overall, Giskard provides a comprehensive testing framework and related tools to assist developers in ensuring the performance, ethical compliance, and regulatory adherence of their LLMs.