We all talk about the business gains from using large language models, but these models come with many known issues, and finding ways to constrain the answers a model can provide is one way to exercise some control over these powerful technologies. Today, at AWS re:Invent in Las Vegas, AWS CEO Adam Selipsky announced Guardrails for Amazon Bedrock.
“With Guardrails for Amazon Bedrock, you can consistently implement safeguards to deliver relevant and safe user experiences aligned with your company’s policies and principles,” the company wrote in a blog post this morning.
The new tool lets companies define and limit the kinds of language a model can use, so if someone asks a question that doesn’t really fit the bot you’ve created, it simply won’t be answered, rather than the model giving a very convincing but wrong answer, or worse, one that damages a brand.
At its most basic level, the tool lets you define topics that are off limits for the model, so it doesn’t answer irrelevant questions. As an example, Amazon cites a financial services company that may want to prevent the bot from giving investment advice, for fear it could offer inappropriate recommendations that customers might take seriously. A scenario like this might work as follows:
“I define a denied topic with the name ‘Investment advice’ and provide a natural language description, such as ‘Investment advice refers to questions, guidance, or recommendations about the management or allocation of funds or assets with the objective of generating returns or achieving specific financial goals.’”
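To make the scenario above concrete, here is a rough sketch of what such a denied-topic definition could look like as a configuration object. The feature was only announced in preview, so the field names below (`name`, `definition`, `type`, `topicPolicyConfig`) are illustrative assumptions about how such an API might be shaped, not confirmed details.

```python
# Hypothetical sketch of a denied-topic guardrail configuration.
# All field names here are illustrative assumptions, not a confirmed
# Guardrails for Amazon Bedrock API shape.
denied_topic = {
    "name": "Investment advice",
    "definition": (
        "Investment advice refers to questions, guidance, or recommendations "
        "about the management or allocation of funds or assets with the "
        "objective of generating returns or achieving specific financial goals."
    ),
    "type": "DENY",  # reject prompts and responses that match this topic
}

guardrail_config = {
    "name": "financial-services-bot-guardrail",  # illustrative name
    "topicPolicyConfig": {"topicsConfig": [denied_topic]},
}
```

The idea is that the natural-language description, not a keyword list, is what the service uses to decide whether a question falls under the denied topic.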
In addition, you can filter specific words and phrases to remove any kind of content that may be offensive, applying filters of different strengths to signal to the model that certain language is out of bounds. Finally, you can filter out personally identifiable information (PII) to keep private data out of model responses.
The guardrails feature was announced in preview today. It will be available to all customers next year.