
I’m happy to share that Amazon SageMaker Clarify now supports foundation model (FM) evaluation (preview). As a data scientist or machine learning (ML) engineer, you can now use SageMaker Clarify to evaluate, compare, and select FMs in minutes based on metrics such as accuracy, robustness, creativity, factual knowledge, bias, and toxicity. This new capability adds to SageMaker Clarify’s existing ability to detect bias in ML data and models and to explain model predictions.
The new capability provides both automatic and human-in-the-loop evaluations for large language models (LLMs) anywhere, including LLMs available in SageMaker JumpStart, as well as models trained and hosted outside of AWS. This removes the heavy lifting of finding the right model evaluation tools and integrating them into your development environment. It also simplifies the complexity of trying to adopt academic benchmarks for your generative artificial intelligence (AI) use case.
Evaluate FMs with SageMaker Clarify
With SageMaker Clarify, you now have one place to evaluate and compare any LLM based on predefined criteria during model selection and throughout the model customization workflow. In addition to automatic evaluation, you can also use the human-in-the-loop capabilities to set up human reviews for more subjective criteria, such as helpfulness, creative intent, and style, by using your own workforce or a managed workforce from SageMaker Ground Truth.
To get started with model evaluations, you can use curated prompt datasets that are purpose-built for common LLM tasks, including open-ended text generation, text summarization, question answering (Q&A), and classification. You can also extend the model evaluation with your own custom prompt datasets and metrics for your specific use case. Human-in-the-loop evaluations can be used for any task and evaluation metric. After each evaluation job, you receive an evaluation report that summarizes the results in natural language and includes visualizations and examples. You can download all metrics and reports and also integrate model evaluations into SageMaker MLOps workflows.
In SageMaker Studio, you can find Model evaluation under Jobs in the left menu. You can also select Evaluate directly from the model details page of any LLM in SageMaker JumpStart.
Select Evaluate a model to set up the evaluation job. The UI wizard guides you through the selection of automatic or human evaluation, model(s), relevant tasks, metrics, prompt datasets, and review teams.
Once the model evaluation job is complete, you can view the results in the evaluation report.
In addition to the UI, you can also start with example Jupyter notebooks that walk you through step-by-step instructions on how to programmatically run model evaluation in SageMaker.
Evaluate models anywhere with the FMEval open source library
To run model evaluation anywhere, including models trained and hosted outside of AWS, use the FMEval open source library. The following example demonstrates how to use the library to evaluate a custom model by extending the ModelRunner class.
For this demo, I choose GPT-2 from the Hugging Face model hub and define a custom HFModelConfig and HuggingFaceCausalLLMModelRunner class that works with causal decoder-only models from the Hugging Face model hub such as GPT-2. The example is also available in the FMEval GitHub repo.
!pip install fmeval

# ModelRunners invoke FMs
from amazon_fmeval.model_runners.model_runner import ModelRunner

# Additional imports for custom model
import warnings

from dataclasses import dataclass
from typing import Tuple, Optional

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


@dataclass
class HFModelConfig:
    model_name: str
    max_new_tokens: int
    normalize_probabilities: bool = False
    seed: int = 0
    remove_prompt_from_generated_text: bool = True


class HuggingFaceCausalLLMModelRunner(ModelRunner):
    def __init__(self, model_config: HFModelConfig):
        self.config = model_config
        self.model = AutoModelForCausalLM.from_pretrained(self.config.model_name)
        self.tokenizer = AutoTokenizer.from_pretrained(self.config.model_name)

    def predict(self, prompt: str) -> Tuple[Optional[str], Optional[float]]:
        # Generate a continuation for the prompt
        input_ids = self.tokenizer(prompt, return_tensors="pt").to(self.model.device)
        generations = self.model.generate(
            **input_ids,
            max_new_tokens=self.config.max_new_tokens,
            pad_token_id=self.tokenizer.eos_token_id,
        )
        # Check whether the generated sequence starts by echoing the prompt
        generation_contains_input = (
            input_ids["input_ids"][0] == generations[0][: input_ids["input_ids"].shape[1]]
        ).all()
        if self.config.remove_prompt_from_generated_text and not generation_contains_input:
            warnings.warn(
                "Your model does not return the prompt as part of its generations. "
                "`remove_prompt_from_generated_text` does nothing."
            )
        if self.config.remove_prompt_from_generated_text and generation_contains_input:
            # Strip the echoed prompt tokens before decoding
            output = self.tokenizer.batch_decode(generations[:, input_ids["input_ids"].shape[1] :])[0]
        else:
            output = self.tokenizer.batch_decode(generations, skip_special_tokens=True)[0]

        # Score the prompt itself: negative language-modeling loss as a log-probability proxy
        with torch.inference_mode():
            input_ids = self.tokenizer(self.tokenizer.bos_token + prompt, return_tensors="pt")["input_ids"]
            model_output = self.model(input_ids, labels=input_ids)
            probability = -model_output[0].item()

        return output, probability
Next, create an instance of HFModelConfig and HuggingFaceCausalLLMModelRunner with the model information.
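For example, the setup could look like this; I use "gpt2" as the Hugging Face model hub name (as in the demo above), and the token budget of 32 is an assumption you can adjust for your own model:

hf_config = HFModelConfig(model_name="gpt2", max_new_tokens=32)
model = HuggingFaceCausalLLMModelRunner(model_config=hf_config)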
Then, select and configure the evaluation algorithm.
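Here is a sketch of what this could look like for the factual knowledge evaluation; the module path, the config parameter, and the "<OR>" delimiter for alternative target answers follow the FMEval examples and may differ in your version of the library:

from amazon_fmeval.eval_algorithms.factual_knowledge import FactualKnowledge, FactualKnowledgeConfig

# "<OR>" separates alternative acceptable answers in the target output
eval_algorithm_config = FactualKnowledgeConfig(target_output_delimiter="<OR>")
eval_algorithm = FactualKnowledge(eval_algorithm_config)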
Let’s first test with one sample. The evaluation score is the percentage of factually correct responses.
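As a sketch, a single-sample check could look like the following; the prompt, the set of acceptable answers, and the evaluate_sample call are based on the FMEval examples and are meant as an illustration:

# Generate a response for a single prompt
model_output = model.predict("London is the capital of?")[0]
print(model_output)

# Score the response against the acceptable answers
print(eval_algorithm.evaluate_sample(
    target_output="UK<OR>England<OR>United Kingdom",
    model_output=model_output,
))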
While it’s not a perfect response, it includes “UK,” so it counts as a factually correct answer here.
Next, you can evaluate the FM using built-in datasets or define your own custom dataset. If you want to use a custom evaluation dataset, create an instance of DataConfig:
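A sketch of what this could look like for a JSON Lines dataset follows; the file name, the column names, the module paths, and the $feature placeholder in the prompt template are assumptions for illustration and may differ in your version of the library:

from amazon_fmeval.data_loaders.data_config import DataConfig
from amazon_fmeval.constants import MIME_TYPE_JSONLINES

config = DataConfig(
    dataset_name="custom_dataset",
    dataset_uri="./custom_dataset.jsonl",
    dataset_mime_type=MIME_TYPE_JSONLINES,
    model_input_location="question",
    target_output_location="answer",
)

eval_output = eval_algorithm.evaluate(
    model=model,
    dataset_config=config,
    prompt_template="$feature",  # $feature is replaced by the model input from the dataset
    save=True,
)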
The evaluation results return a combined evaluation score across the dataset, with detailed results for each model input stored in a local output path.
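For example, you could inspect the results like this; the dataset_scores and output_path attributes on the returned objects are assumptions based on the library’s documentation, so check them against your installed version:

# Aggregated scores per evaluated dataset, plus the path to per-record results
for result in eval_output:
    print(result.dataset_name, result.dataset_scores)
    print("Detailed per-record results:", result.output_path)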
Join the preview
FM evaluation with Amazon SageMaker Clarify is available today in public preview in AWS Regions US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland). The FMEval open source library is available on GitHub. To learn more, visit Amazon SageMaker Clarify.
Get started
Log in to the AWS Management Console and start evaluating your FMs with SageMaker Clarify today!
— Antje