Artificial intelligence has a problem. In the last decade, companies have begun to deploy AI widely and have come to rely on a host of different tools and infrastructure. There are tools for data collection, cleaning, and storage. There are tools for model selection and training. There are tools for orchestration, and for performance monitoring and auditing. There are tools that focus on traditional predictive AI models. And there are other tools that focus on generative AI.
There are so many—too many—tools to help companies deploy AI.
But no tool provides a unified approach to ensuring AI models align with the law, with regulations, and with company values. What companies need is a unified alignment platform that addresses all AI risks, not just some of them. It is not enough to ensure that a model does not violate privacy restrictions, for example, if that model is overtly biased. It’s not enough to focus solely on traditional predictive AI or solely on generative AI systems. Companies use both and must address the liabilities of both.
But at the present moment, companies only have access to siloed tools that manage some but not all AI risks, forcing them to cobble together a complicated suite of tools if they are serious about ensuring their AI does not cause harm, violate laws, or tarnish their reputation.
When we say “alignment,” we aren’t just referring to a model’s performance objectives, and we aren’t speculating about artificial general intelligence, its potential risks, or abstract society-level value alignment. To us, the AI alignment problem is much more specific.
At a high level, model objectives generally have two parts: do this, and don’t do that. There are positive directives, such as be helpful, and there are prohibitions, such as don’t be harmful. Positive directives generally correspond to performance objectives, which receive most of the attention from data scientists: if a model can’t perform well, it won’t get deployed, so performance tends to get attention first. But the best models combine high helpfulness with low harmfulness, and prohibiting certain model behavior is where the issues come in.
There is a long list of things AI systems should not do—just like there is a long list of things software systems in general should avoid. For AI systems, this list has become siloed, meaning that each harm is addressed by separate teams with separate tools, if it is addressed at all. Companies have teams to address privacy issues, which are typically the responsibility of the chief privacy officer. Cybersecurity issues are addressed by information security teams, which report to the chief information security officer. Bias issues tend to fall on legal teams, who are the most familiar with antidiscrimination laws. Copyright concerns are similarly addressed by lawyers.
But what happens when all of these teams need to work together to manage AI? What tools can they use to align all these requirements and ensure that AI systems do not violate the law or cause harm? Today, companies simply don’t have any good options, which is why it takes so long, and so many manual resources, for companies to holistically manage AI risks. And this is why existing efforts to manage AI risks cannot scale.
What companies need is a single place to manage all of these risks—not simply to ensure high performance but to manage all the legal, compliance, and reputational risks that continue to grow and become more complex.
This need is already apparent in the growing number of AI incidents—just take one look at resources like the AI Incident Database and it’s clear that companies are struggling to manage all these risks. Survey after survey has shown that companies are aware of this issue, with executives repeatedly identifying risk as one of the main barriers to adopting AI. We have helped companies manage these risks for years, and in our experience the main cause of these struggles is a fragmented approach to managing AI risk, where siloed teams use disconnected tools to keep track of their AI. An alignment platform is the only way for companies to achieve this kind of unified approach.
Alignment platforms are composed of three main layers: a workflow management layer, an analysis and validation layer, and a reporting layer.
We are acutely aware of the need for an alignment platform, which is why we built Luminos.AI. Throughout the last decade, we’ve watched companies adopt AI only to stumble—sometimes with serious consequences—in managing the requirements placed on these systems. The more AI models a company has, in fact, the harder it usually is to manage risks across all of those systems, and the more resource-intensive and confusing its risk management efforts become.
Luminos.AI was built from the ground up to manage these three essential layers of alignment.
The first, the workflow management layer, allows all the teams involved in building and deploying AI models to collaborate and standardize their efforts to drive efficiencies and enable their AI adoption to scale. It is not uncommon, for example, for customers to have months-long waits for every model that needs to be approved for deployment—some customers have waiting times of over a year! (Although this appears to be an outlier; more typical waiting times range from two to six months.) But these periods add up, meaning that the more AI systems a company wants to deploy, the worse this approval workflow process becomes. With the Luminos workflow management layer, companies can automatically review and approve AI systems, enabling them to deploy hundreds of systems when previously this was not possible.
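To make this concrete, here is a minimal sketch of what automated review logic in a workflow layer can look like. The rule names, use cases, and thresholds are purely illustrative assumptions, not the actual rules the platform ships with.

```python
from dataclasses import dataclass

# Hypothetical approval rules for illustration only; a real workflow
# would draw its rules and thresholds from each company's own policies.
@dataclass
class ModelSubmission:
    name: str
    use_case: str            # e.g., "credit_underwriting", "marketing_copy"
    privacy_review_passed: bool
    bias_test_score: float   # 0.0 (worst) to 1.0 (best)
    security_scan_passed: bool

def review(submission: ModelSubmission) -> str:
    """Return 'approved', 'escalate', or 'rejected' based on simple rules."""
    if not submission.privacy_review_passed or not submission.security_scan_passed:
        return "rejected"
    # High-risk use cases always go to a human reviewer.
    if submission.use_case in {"credit_underwriting", "hiring", "insurance_pricing"}:
        return "escalate"
    # Otherwise approve automatically when quantitative tests clear the bar.
    return "approved" if submission.bias_test_score >= 0.8 else "escalate"

print(review(ModelSubmission("churn-model-v3", "marketing_copy", True, 0.92, True)))
# -> approved
```

The design point is simply that routine cases clear automatically while high-risk or failing cases are routed to humans, which is what turns a months-long queue into a scalable process.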
The second, the Luminos analysis and validation layer, allows for risks to be quantified and aligned, meaning that companies no longer have to debate over and manually approve which tests to apply to each model. Without this layer, selecting and running the right tests adds weeks or months to alignment efforts, which is why we embed a range of different tests directly into the platform to enable testing at scale. Tests can also be customized so that teams can modify quantitative testing when they need to.
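As one example of the kind of quantitative check such a layer can run automatically, the short sketch below computes a disparate impact ratio (the familiar four-fifths rule used in bias reviews). The data and the 0.8 threshold are illustrative assumptions, not a description of how Luminos implements its tests.

```python
def selection_rate(outcomes: list[int]) -> float:
    """Fraction of positive (favorable) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected: list[int], reference: list[int]) -> float:
    """Ratio of the protected group's selection rate to the reference group's."""
    return selection_rate(protected) / selection_rate(reference)

# Illustrative model decisions (1 = favorable outcome, 0 = unfavorable).
protected_group = [1, 0, 0, 1, 0, 0, 1, 0]
reference_group = [1, 1, 0, 1, 1, 0, 1, 1]

ratio = disparate_impact_ratio(protected_group, reference_group)
# A common rule of thumb flags ratios below 0.8 for further review.
print(f"disparate impact ratio: {ratio:.2f}", "flag" if ratio < 0.8 else "ok")
```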
This layer is even more important for generative AI systems, which produce a volume of output that no human can possibly review. Our solution to this problem is to use our own AI systems—some of which are general, some of which are industry specific, and all of which can be fine-tuned to each customer’s data—to monitor and score every model output at an individual level to ensure alignment. This means that companies can obtain and store granular, transparent, and auditable alignment scores for every legal and risk requirement they need, every time a user interacts with a model.
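A simplified sketch of this per-output scoring loop follows. The scoring function here is a toy keyword heuristic standing in for a judge model, and the requirement names are invented for the example; the point is the shape of the auditable record stored for every interaction.

```python
import json
import time

REQUIREMENTS = ["privacy", "bias", "toxicity", "copyright"]

def score_output(prompt: str, output: str, requirement: str) -> float:
    """Placeholder scorer: in practice a judge model (general or industry
    specific, possibly fine-tuned on customer data) would return a
    calibrated score for this requirement."""
    flagged_terms = {"ssn", "social security"}  # toy heuristic for illustration
    if requirement == "privacy" and any(t in output.lower() for t in flagged_terms):
        return 0.1
    return 0.95

def log_interaction(prompt: str, output: str) -> dict:
    """Score one model response against every requirement and keep an auditable record."""
    record = {
        "timestamp": time.time(),
        "prompt": prompt,
        "output": output,
        "scores": {r: score_output(prompt, output, r) for r in REQUIREMENTS},
    }
    print(json.dumps(record["scores"]))
    return record

log_interaction("Summarize this account note.", "The customer's SSN is on file.")
```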
Critically, measurements of both predictive and generative AI systems can easily be used to mitigate each model’s risks by refining or fine-tuning the model to address specific harms. The platform is simple to integrate into model CI/CD pipelines to monitor alignment over time as both model outputs and risk profiles change, informing everyone with a stake in the system, not just the model team.
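The sketch below shows how such a check might gate a CI/CD stage: if any monitored score falls below its agreed threshold, the stage fails. The thresholds, score values, and function names are hypothetical placeholders, not actual platform API calls.

```python
import sys

# Hypothetical alignment thresholds agreed with legal and risk teams.
THRESHOLDS = {"bias": 0.80, "privacy": 0.90, "toxicity": 0.95}

def fetch_latest_scores(model_id: str) -> dict:
    """Stand-in for a call to the monitoring service; values are illustrative."""
    return {"bias": 0.84, "privacy": 0.97, "toxicity": 0.91}

def alignment_gate(model_id: str) -> int:
    scores = fetch_latest_scores(model_id)
    failures = [name for name, floor in THRESHOLDS.items() if scores.get(name, 0.0) < floor]
    if failures:
        print(f"{model_id}: alignment gate FAILED on {failures}")
        return 1  # nonzero exit code fails the CI/CD stage
    print(f"{model_id}: alignment gate passed")
    return 0

if __name__ == "__main__":
    sys.exit(alignment_gate("support-chatbot-v7"))
```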
The Luminos reporting layer is what proves that alignment has been successfully implemented. It consists of automatically generated reports, written in plain English and footnoted as if carefully put together by a human, that summarize how the risks of each AI system have been thoroughly managed. We have seen customers use these reports to demonstrate compliance and even stave off lawsuits when needed. These reports are automatically stored as a system of record, kept up-to-date while each model is in use, and always accessible when needed.
Finally, the platform provides an open API that other tools can build upon and supports standard file formats, enabling flexible integration across the wide range of applications that surround AI systems. Tests, rules, and reporting are highly configurable, so each customer can adapt the system to its requirements, even integrating custom tests with our analysis and validation layer.
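As an illustration of what building on such an API could look like, the sketch below registers a custom test over HTTP. The endpoint URL, payload fields, and authentication scheme are invented for the example and are not the actual Luminos.AI API.

```python
import json
import urllib.request

# Entirely hypothetical endpoint and payload shape, shown only to illustrate
# how a custom test might be registered against an open API.
payload = {
    "name": "regional_lending_bias_check",
    "applies_to": ["predictive"],
    "metric": "disparate_impact_ratio",
    "threshold": 0.8,
}

req = urllib.request.Request(
    "https://api.example.com/v1/tests",  # placeholder URL
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "Authorization": "Bearer <token>"},
    method="POST",
)
# urllib.request.urlopen(req)  # would submit the registration in a real integration
print(json.dumps(payload, indent=2))
```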
The AI alignment platform is a new architecture and approach to ensuring AI systems behave as they should and are aligned with the right laws and the right values. Without AI alignment, companies cannot adopt AI. AI alignment platforms hold the key to this adoption, and Luminos.AI is leading the way.