– Enterprises are hesitant to adopt AI solutions due to the difficulty of balancing the cost of governance with the behaviours of large language models (LLMs)
– Challenges in specifying what a harmful answer is for LLMs
– IBM is looking to develop AI that developers can trust
– IBM aims to use the law, corporate standards, and internal governance to control LLMs
– Contextual documents can be used to fine-tune LLMs and detect harmful content
– IBM develops LLMs on trustworthy data and implements detection mechanisms for biases
– The proposed EU AI Act will link AI governance with user intentions
– Usage is a fundamental part of IBM’s model of governance
Enterprises are hesitant to adopt AI solutions due to the difficulty of balancing the cost of governance against the behaviours of large language models (LLMs), such as hallucinations, data privacy violations, and the potential for the models to output harmful content.
One of the most difficult challenges facing the adoption of LLMs is specifying to the model what a harmful answer is, but IBM believes it can help improve the situation for firms everywhere.
Speaking at an event in Zurich, Elizabeth Daly, STSM, Research Manager, Interactive AI Group of IBM Research Europe, highlighted that the company is looking to develop AI that developers can trust, noting, “It’s easy to measure and quantify clicks, it’s not so easy to measure and quantify what is harmful content.”
Detect, Control, Audit
Generic governance policies are not enough to control LLMs, so IBM is looking to develop LLMs that use the law, corporate standards, and the internal governance of each individual enterprise as a control mechanism, allowing governance to go beyond corporate standards and incorporate the ethics and social norms of the country, region, or industry in which the model is used.
These documents can provide context to an LLM, and can be used to ‘reward’ the model for remaining relevant to its current task. This allows an innovative level of fine-tuning when determining whether an AI is outputting harmful content that may violate the social norms of a region, and can even allow an AI to detect whether its own outputs could be identified as harmful.
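To make the idea concrete, here is a minimal sketch of how a policy document might serve as context for scoring a model's draft answer, in an "LLM-as-judge" style. The prompt wording, the policy text, and the `call_llm` helper are all assumptions for illustration; IBM has not published the exact mechanism it uses.

```python
# Illustrative sketch only: a policy document supplies the governance context,
# and a second "judge" call scores a draft answer against it.

POLICY = (
    "Answers must not give medical or legal advice.\n"
    "Answers must stay on the topic of the user's question.\n"
)

JUDGE_PROMPT = (
    "You are a compliance checker.\n"
    "Policy:\n{policy}\n"
    "Question: {question}\n"
    "Draft answer: {answer}\n"
    "Reply with only a number from 0 (violates the policy) to 1 (fully compliant)."
)

def call_llm(prompt: str) -> str:
    """Hypothetical inference call; swap in a real model client here."""
    raise NotImplementedError

def policy_reward(question: str, answer: str) -> float:
    """Score a draft answer against the policy document; higher means safer."""
    verdict = call_llm(
        JUDGE_PROMPT.format(policy=POLICY, question=question, answer=answer)
    )
    return float(verdict.strip())
```

A score produced this way could be used either as a fine-tuning reward signal or as a runtime filter, which is what lets the same policy document drive both training and detection.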
Moreover, IBM has been meticulous in developing its LLMs on trustworthy data, and detects, controls, and audits for potential biases at each stage of the pipeline. This is in stark contrast to off-the-shelf foundation models, which are typically trained on biased data; even if that data is later removed, the biases can still resurface.
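The "detect, control, audit" pattern can be pictured as a check that runs at every pipeline stage, removing flagged records and keeping an audit trail. The stage names and the toy blocklist detector below are assumptions, not IBM's published pipeline:

```python
# Illustrative sketch: per-stage bias detection (detect), removal (control),
# and logging (audit) in a data pipeline.

from dataclasses import dataclass, field

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, stage: str, finding: str) -> None:
        self.entries.append((stage, finding))

def check_for_bias(records: list[str]) -> list[str]:
    """Toy detector: flag records containing terms from a blocklist."""
    blocklist = {"placeholder_term_a", "placeholder_term_b"}
    return [r for r in records if any(term in r for term in blocklist)]

def run_stage(name: str, records: list[str], log: AuditLog) -> list[str]:
    flagged = check_for_bias(records)          # detect
    for r in flagged:
        log.record(name, f"removed: {r!r}")    # audit
    return [r for r in records if r not in flagged]  # control

log = AuditLog()
data = ["clean example", "placeholder_term_a example"]
data = run_stage("ingest", data, log)
data = run_stage("pretrain-filter", data, log)
print(data, log.entries)
```

Running the check at every stage, rather than only once on the raw corpus, is what guards against biases resurfacing later in the pipeline.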
The proposed EU AI Act will link the governance of AI with the intentions of its users, and IBM states that usage is a fundamental part of how it will govern its models, as some users may use its AI for summarization tasks while others may use it for classification. Daly states that usage is therefore a “first class citizen” in IBM’s model of governance.
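One way to read "usage as a first class citizen" is that callers must declare a task up front, and governance rules are selected per task. The task names and rules below are assumptions for illustration, not IBM's actual policy schema:

```python
# Illustrative sketch: requests must declare their usage, and each declared
# usage carries its own governance rules.

GOVERNANCE_RULES = {
    "summarization": {"max_output_tokens": 256, "must_cite_source": True},
    "classification": {"allowed_labels_only": True},
}

def govern_request(task: str, payload: dict) -> dict:
    """Reject requests whose declared usage has no registered policy."""
    if task not in GOVERNANCE_RULES:
        raise ValueError(f"No governance policy registered for task {task!r}")
    return {"task": task, "rules": GOVERNANCE_RULES[task], "payload": payload}

print(govern_request("summarization", {"text": "..."}))
```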