--- title: "Introduction to the 'baserater' package" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Introduction to the 'baserater' package} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- The `baserater` package allows you to: - Download a database of group–adjective pairs annotated with stereotype strength scores generated by large language models ('GPT-4' and 'LLaMA 3.3-70B'); - Generate new typicality ratings using a large language model served through any 'Inference Provider' API (e.g., 'Together AI' or 'Fireworks') of your choice, with customizable prompts and parameters; - Evaluate newly generated typicality ratings against human ground truth (ratings collected from Prolific participants) and benchmark them against baseline models; - Automatically build a new base-rate item database from a group x description typicality matrix. It is designed to streamline the creation of base-rate neglect items for reasoning experiments. A base-rate neglect item typically involves two groups (e.g., "engineers" and "construction workers") and a descriptive trait (e.g., "nerdy"). Participants are presented with statistical information (base-rates; e.g., "There are 995 construction workers and 5 engineers") and stereotypical information (the descriptive trait). Their task is to decide the most likely group membership of an individual described by that trait. The "typicality rating" generated by large language models quantifies how strongly certain traits (e.g., "nerdy," "kind") or descriptions are (stereo)typically associated with specific groups (e.g., engineers, nurses). This allows researchers to precisely measure and control "stereotype strength"–the extent to which a given description is perceived as belonging more strongly to one group over another (e.g., the trait "nerdy" is typically seen as more characteristic of engineers than of construction workers). To learn more about the theoretical framework and validation studies underlying the `baserater` package, see the paper: *Using Large Language Models to Estimate Belief Strength in Reasoning* (Beucler et al., Forthcoming). ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>", message = FALSE ) ``` # 0. Setup ```{r setup, warning = FALSE, message = FALSE} library(baserater) library(tidyverse) library(knitr) ``` # 1. Downloading the Data You can begin by downloading the article datasets using `download_data()`. Use `download_data()` to retrieve either: - The full base-rate item database (from 'GPT-4' and 'LLaMA 3.3'); - The validation ratings, which include human typicality scores and those generated by 'GPT-4' and 'LLaMA 3.3' on 100 group–adjective pairs. - The two typicality matrices (from 'GPT-4' and 'LLaMA 3.3') that were used to generate the base-rate item database. - The group and adjective material used to build the database. ```{r} # Load the base-rate database database <- download_data("database") # Load the typicality validation ratings ratings <- download_data("validation_ratings") # Load the typicality matrices gpt4_matrix <- download_data("typicality_matrix_gpt4") llama3_3_matrix <- download_data("typicality_matrix_llama3.3") # Load the group-adjective pairs material <- download_data("material") ``` # 2. Generating New Typicality Ratings from LLMs You can generate new typicality ratings using any 'Inference Provider' API (e.g., 'Together AI', 'Hugging Face' Inference, 'Fireworks') via `generate_typicality()`. 
# 2. Generating New Typicality Ratings from LLMs

You can generate new typicality ratings using any 'Inference Provider' API (e.g., 'Together AI', 'Hugging Face' Inference, 'Fireworks') via `generate_typicality()`. This feature is experimental, as it depends on the provider's API compatibility. See the [Inference Provider Setup](#inference-provider-setup) section at the end of this vignette for details on how to configure your API endpoint and authentication token.

The function works by sending structured prompts to a large language model and parsing the numeric outputs (between 0 and 100), which reflect how well a description (e.g., an adjective) fits a given group. By default, the function uses the same prompt and generation parameters described in the paper. As in the paper, several ratings are requested from the same model and averaged to obtain a final score; the function lets you specify the (minimum) number of responses to generate. Note that `generate_typicality()` can be quite slow, especially if you are generating many scores.

```{r, message = FALSE, warning = FALSE, echo = FALSE}
# Load pre-generated scores
new_scores <- readRDS(system.file("extdata", "new_typicality_scores_llama3.1_8B.rds", package = "baserater"))
```

Here is an example function call:

```{r, eval = FALSE}
# Original prompt from the paper
original_system_prompt_content <- "You are expert at accurately reproducing the stereotypical associations humans make, in order to annotate data for experiments. Your focus is to capture common societal perceptions and stereotypes, rather than factual attributes of the groups, even when they are negative or unfounded."

original_user_prompt_content_template <- "Rate how well the adjective '{description}' reflects the prototypical member of the group '{group}' on a scale from 0 ('Not at all') to 100 ('Extremely'). To clarify, consider the following examples: 1. 'Rate how well the adjective FUNNY reflects the prototypical member of the group CLOWN on a scale from 0 (Not at all) to 100 (Extremely).' A high rating is expected because the adjective 'FUNNY' closely aligns with the typical characteristics of a 'CLOWN'. 2. 'Rate how well the adjective FEARFUL reflects the prototypical member of the group FIREFIGHTER on a scale from 0 (Not at all) to 100 (Extremely).' A low rating is expected because the adjective 'FEARFUL' diverges significantly from the typical characteristics of a 'FIREFIGHTER'. 3. 'Rate how well the adjective PATIENT reflects the prototypical member of the group ENGINEER on a scale from 0 (Not at all) to 100 (Extremely).' A mid-scale rating is expected because the adjective 'PATIENT' neither closely aligns nor diverges significantly from the typical characteristics of an 'ENGINEER'. Your response should be a single score between 0 and 100, with no additional text, letters, or symbols included."

# Example using the validation ratings
groups <- ratings$group
descriptions <- ratings$adjective

api_token <- Sys.getenv("PROVIDER_API_TOKEN")

new_scores <- generate_typicality(
  groups = groups,
  descriptions = descriptions,
  api_url = "https://api.together.xyz/v1/chat/completions", # example for the 'Together AI' API
  api_token = api_token,
  model = "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo", # model name on 'Together AI'
  n = 3,          # number of responses to generate
  min_valid = 2,  # minimum number of valid responses; the mean of the valid ones is used
  max_tokens = 3, # numeric output between 0 and 100
  retries = 2,    # number of retries in case of API errors
  matrix = FALSE,
  return_raw_scores = TRUE,
  return_full_responses = TRUE,
  verbose = TRUE
)
```
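Because the function can be slow and providers may throttle requests, one option is to generate ratings in smaller batches and pause between calls. The sketch below is only an illustration: the batch size and pause length are arbitrary choices (not package defaults), and the arguments are the same ones used above.

```{r, eval = FALSE}
# Illustration only: generate ratings in batches of 20 pairs, pausing between calls
batch_ids <- split(seq_along(groups), ceiling(seq_along(groups) / 20))

batched_scores <- lapply(batch_ids, function(idx) {
  out <- generate_typicality(
    groups = groups[idx],
    descriptions = descriptions[idx],
    api_url = "https://api.together.xyz/v1/chat/completions",
    api_token = api_token,
    model = "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
    n = 3,
    min_valid = 2,
    max_tokens = 3,
    retries = 2,
    matrix = FALSE
  )
  Sys.sleep(5) # short pause to stay under provider rate limits
  out
})

# Combine the per-batch tibbles into one data frame
all_scores <- bind_rows(batched_scores)
```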
Here is what the output of `generate_typicality()` looks like. Note that occasional error messages from the 'Inference Provider' API (e.g., too many requests) may also appear in the output, since we set `return_full_responses` to `TRUE`.

```{r, warning = FALSE}
knitr::kable(head(new_scores))
```

We can also look at the distribution of the new typicality ratings generated by 'LLaMA 3.1-8B-Instruct':

```{r}
# Distribution of new typicality ratings
ggplot(new_scores, aes(x = mean_score)) +
  geom_histogram(binwidth = 5, fill = "steelblue", color = "white") +
  labs(
    title = "Distribution of Typicality Ratings from 'LLaMA 3.1-8B-Instruct'",
    x = "Typicality Rating",
    y = "Count"
  ) +
  theme_classic()
```

The `generate_typicality()` function supports two modes:

- `matrix = TRUE` (default): Computes a cross-product of unique groups and descriptions. Returns a list with matrices of scores and responses.
- `matrix = FALSE`: Computes row-by-row scores for the group–adjective pairs you supply. Returns a tibble.

See `?generate_typicality` for full documentation and customization options.

# 3. Evaluating New Ratings

You can then assess how well a new model or scoring method captures group–adjective typicality by comparing your ratings to the human ground truth and to the benchmark models ('GPT-4' and 'LLaMA 3.3'). To do that, you need typicality ratings for the 100 validation items, stored in a data frame with three columns: `group`, `adjective`, and `rating`. We will use the `new_scores` data frame we generated earlier.

```{r, warning = FALSE}
# Create a data frame with the same structure as the validation set
new_scores <- new_scores %>%
  mutate(adjective = description, rating = mean_score) %>%
  select(group, adjective, rating)

knitr::kable(head(new_scores))
```

First, let’s examine the correlation between the new scores and the human ratings visually:

```{r}
# Join human and model scores
comparison_df <- left_join(
  ratings %>% select(group, adjective, human = mean_human_rating),
  new_scores,
  by = c("group", "adjective")
)

# Scatterplot
ggplot(comparison_df, aes(x = rating, y = human)) +
  geom_point(alpha = 0.6) +
  geom_smooth(method = "lm", se = FALSE, color = "darkred") +
  labs(
    title = "Scatterplot of 'LLaMA 3.1' and Human Typicality Ratings",
    y = "Average Human Rating",
    x = "Average 'LLaMA 3.1' Rating"
  ) +
  theme_classic()
```

Use `evaluate_external_ratings()` to compute correlations and display comparisons with our LLM baselines:

```{r}
# Print correlation summary with human ground truth and baselines
knitr::kable(evaluate_external_ratings(new_scores))

# Optionally store the output in a variable
results <- evaluate_external_ratings(new_scores)
```

As you can see, the smaller and older 'LLaMA 3.1-8B-Instruct' does not perform as well as our baseline models ('GPT-4' and 'LLaMA 3.3-70B').

# 4. Constructing Base-Rate Items from Typicality Matrices

To compute stereotype strength using `extract_base_rate_items()`, you’ll need a typicality matrix: a table of scores where rows correspond to groups and columns correspond to descriptions (e.g., adjectives). Each cell represents how typical a description is for a group.
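To make the expected layout concrete, here is a small made-up matrix in the same wide format as the downloaded matrices (a `group` column plus one numeric column per description). The groups, adjectives, and scores below are invented purely for illustration:

```{r, eval = FALSE}
# Toy typicality matrix (invented values): one row per group, one column per description
toy_matrix <- tibble(
  group = c("engineers", "construction workers", "clowns"),
  nerdy = c(85, 30, 20),
  funny = c(35, 40, 95)
)

# A matrix in this layout can be passed to extract_base_rate_items()
toy_items <- extract_base_rate_items(toy_matrix)
```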
In practice, you can load the 'GPT-4' typicality matrix using `download_data()` (a matching matrix is available for 'LLaMA 3.3'):

```{r, warning = FALSE}
# The typicality matrix from 'GPT-4' is a data frame with group–adjective pairs and their typicality scores
gpt4_matrix <- download_data("typicality_matrix_gpt4")

knitr::kable(head(gpt4_matrix))
```

Extract base-rate items by applying the function to the matrix:

```{r}
# Extract base-rate items from the typicality matrix
base_rate_items <- extract_base_rate_items(gpt4_matrix)
```

Note that `extract_base_rate_items()` can take some time to run on a large matrix, as it creates a very large number of items (here, around 110,000 base-rate items).

You can then explore or filter the output, e.g., to view the strongest stereotypes:

```{r, warning = FALSE}
# View top base-rate items by stereotype strength
knitr::kable(base_rate_items %>%
  arrange(desc(StereotypeStrength)) %>%
  head(10))
```

You can also subset the database to focus on a specific adjective or group. For example, you can visualize the stereotype strength of the adjective "selfish" across group combinations:

```{r}
# Pick one adjective and extract group typicality scores
df <- gpt4_matrix %>%
  select(group, selfish) %>%
  rename(score = selfish) %>%
  arrange(desc(score)) # sort by how typical the group is

# Save group names and their scores
group_order <- df$group
typ_values <- df$score
names(typ_values) <- df$group

# Build all group pairs and compute log-ratios
res_df <- expand.grid(
  g1 = group_order,
  g2 = group_order,
  KEEP.OUT.ATTRS = FALSE
) %>%
  mutate(
    typ1 = typ_values[as.character(g1)],
    typ2 = typ_values[as.character(g2)],
    log_ratio = log(pmax(typ1, 1e-9) / pmax(typ2, 1e-9))
  ) %>%
  mutate(
    g1 = factor(g1, levels = group_order),
    g2 = factor(g2, levels = group_order)
  )

# Keep only pairs where g1 is ranked higher than g2 and the log-ratio is positive
res_df <- res_df %>%
  filter(as.integer(g1) < as.integer(g2), log_ratio > 0)

# Add identity pairs (g1 == g2) with NA
diag_df <- tibble(
  g1 = factor(group_order, levels = group_order),
  g2 = factor(group_order, levels = group_order),
  log_ratio = NA_real_
)

# Combine with original filtered upper-triangle pairs
res_df <- bind_rows(res_df, diag_df)

# Plot the heatmap
ggplot(res_df, aes(x = g2, y = g1, fill = log_ratio)) +
  geom_tile(color = "white") +
  scale_fill_gradient2(
    low = "steelblue",
    mid = "white",
    high = "firebrick",
    midpoint = 0,
    na.value = "grey90",
    name = "Log(Group 1 / Group 2)"
  ) +
  labs(
    title = "Stereotype Strength for Adjective 'Selfish'",
    x = "Group 2",
    y = "Group 1"
  ) +
  theme_classic() +
  theme(
    axis.text.y = element_text(angle = 15, hjust = 1, size = 5),
    axis.text.x = element_text(angle = 45, hjust = 1, size = 5),
    panel.grid = element_blank()
  )
```
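If you plan to use these items in an experiment, a common next step is to keep only items within a target range of stereotype strength. The sketch below assumes only the `StereotypeStrength` column shown above; the cut-off values are arbitrary illustration choices, not package recommendations:

```{r, eval = FALSE}
# Illustration only: keep items with a moderately strong stereotype
selected_items <- base_rate_items %>%
  filter(between(StereotypeStrength, 1.5, 3)) %>%
  arrange(desc(StereotypeStrength))

# Number of items retained after filtering
nrow(selected_items)
```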
## Citation

If you use `baserater` in your research, please cite:

Beucler, J. (2025). *baserater: An R package using large language models to estimate belief strength in reasoning* (Version 0.1.0) [Computer software]. Zenodo. https://doi.org/10.5281/zenodo.15449192

## Inference Provider Setup

The `baserater` package can connect to inference providers such as 'Together AI', 'Hugging Face' Inference, 'Fireworks', and 'Replicate'. These platforms host or serve large language models and allow you to query them through a standard HTTP interface. Here are some useful links to get started:

- ['Together AI'](https://api.together.xyz/)
- ['Hugging Face' Inference Endpoints](https://huggingface.co/docs/inference-endpoints)
- ['Fireworks AI'](https://fireworks.ai/)
- ['Replicate'](https://replicate.com/)

To generate new scores using the `generate_typicality()` function, complete the following setup steps (a short sketch putting them together follows the list):

- Obtain your provider’s API URL and token. Optionally, store them as environment variables in R, for example:
  `Sys.setenv(PROVIDER_API_URL = "https://api.together.xyz/v1/chat/completions")`
  `Sys.setenv(PROVIDER_API_TOKEN = "your_secret_token")`
- Check model availability and license terms: some models require that you agree to license terms before use. Check the provider’s model catalog for details.
- Verify the correct model identifier for your provider: model names can vary depending on the provider (for example, `"meta-llama/Llama-3.3-70B-Instruct-Turbo"` on 'Together AI' vs. `"meta-llama/Llama-3.3-70B-Instruct"` on 'Hugging Face'). Always use the exact identifier listed in the provider’s documentation.
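As a quick check that the setup works, you can read the environment variables back and pass them to `generate_typicality()` for a single group–adjective pair. This sketch simply reuses the argument names shown earlier in the vignette and assumes length-one vectors are accepted for `groups` and `descriptions`, just as the longer vectors above are:

```{r, eval = FALSE}
# Read the endpoint and token stored above
api_url <- Sys.getenv("PROVIDER_API_URL")
api_token <- Sys.getenv("PROVIDER_API_TOKEN")

# Minimal test on a single group–adjective pair, using the default prompt
test_score <- generate_typicality(
  groups = "engineers",
  descriptions = "nerdy",
  api_url = api_url,
  api_token = api_token,
  model = "meta-llama/Llama-3.3-70B-Instruct-Turbo", # use the exact identifier for your provider
  n = 3,
  min_valid = 2,
  matrix = FALSE
)

test_score
```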