The L-BAM library lists pre-trained models. The models can be called with textPredict(), textAssess(), or textClassify() like this:

library(text)

# Example calling a model using its name in the L-BAM library
textPredict(
  model_info = "facebook_valence",
  texts = "what is the valence of this text?"
)


# Example calling a model using its abbreviation
textClassify(
  model_info = "implicit_power_fine_tuned_roberta",
  texts = "It looks like they have problems collaborating."
)

The text prediction functions take a model and one or more texts, automatically transform the texts into word embeddings, and return estimated scores or probabilities.
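A minimal sketch of this workflow, assuming the "facebook_valence" model shown above is available in the L-BAM library (the texts here are invented examples):

```r
library(text)

# Score several texts at once; the embedding step happens internally,
# and the function returns one estimated value per input text.
valence_scores <- textPredict(
  model_info = "facebook_valence",
  texts = c(
    "I feel great today.",
    "This has been a difficult week."
  )
)

valence_scores
```

Passing a character vector to texts scores all of the texts in one call, so there is no need to loop over individual strings.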

If you want to add a pre-trained model to the L-BAM library, please fill out the details in this Google sheet and email us (oscar [dot] kjell [at] psy [dot] lu [dot] se) so that we can update the online table.

Note that you can adjust the width of the columns when scrolling the table.

The table includes the following columns: Construct_Concept_Behaviours, Outcome, Language, Language_type, Level, N_training, N_evaluation, Source, Participants_training, Participants_evaluation, Label_types, Language_domain_distribution, Open_data, Model_type, Features, Validation_metric1, N_fold_cv_accuracy.1, Held_out_accuracy.1, Held_out_accuracy.2, SEMP_accuracy.1, Other_metrics_n_fold_cv, Other_metrics_held_out, Other_metrics_SEMP, Other_evaluation, Ethical_approval, Ethical_statement, Reference, Date, Contact_details, License, Study_type, Original, Miscellaneous, Command_info, Name, Name_description, Path, and Model_Type.

Models are currently available for the following constructs (number of models in parentheses): depression (8), anxiety (8), valence (2), implicit need for power (4), implicit need for achievement (4), implicit need for affiliation (4), harmony in life (4), satisfaction with life (4), and balance vs harmony (2).

References

Gu, Kjell, Schwartz, & Kjell. (2024). Natural Language Response Formats for Assessing Depression and Worry with Large Language Models: A Sequential Evaluation with Model Pre-registration.

Kjell, O. N., Sikström, S., Kjell, K., & Schwartz, H. A. (2022). Natural language analyzed with AI-based transformers predict traditional subjective well-being measures approaching the theoretical upper limits in accuracy. Scientific Reports, 12(1), 3918.

Nilsson, Runge, Ganesan, Lövenstierne, Soni, & Kjell. (2024). Automatic Implicit Motives Codings are at Least as Accurate as Humans' and 99% Faster.