lares provides convenient wrappers for popular APIs,
making it easy to integrate AI services, financial data, and more into
your R workflows.
lares uses a YAML configuration file to store credentials.
Create a `config.yml` file:
```yaml
default:
  openai:
    secret_key: "sk-your-openai-key-here"
  gemini:
    api_key: "your-gemini-key-here"
  database:
    server: "localhost"
    database: "mydb"
    uid: "user"
    pwd: "password"
```

Set the credentials directory (one-time setup):
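One way to point lares at that file is the `LARES_CREDS` environment variable, which matches the default `env` argument of `get_credentials()`. A minimal sketch, assuming your `config.yml` lives in a `~/creds` directory (adjust the path to yours):

```r
library(lares)

# Tell lares where to find config.yml (example path; one-time setup,
# or place this line in your .Rprofile to persist it across sessions)
Sys.setenv("LARES_CREDS" = "~/creds")

# Credentials can then be fetched by their section name in the YAML file
creds <- get_credentials("openai")
```

Once set, wrappers such as `gpt_ask()` can pick up the stored keys without you passing them explicitly.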
Use gpt_prompter() to build better prompts:
Set global defaults via environment variables:
```r
# In .Renviron file:
# LARES_GPT_MODEL=gpt-4
# LARES_GPT_URL=https://api.openai.com/v1/chat/completions
# LARES_GEMINI_API=https://generativelanguage.googleapis.com/v1beta/models/

# Check current settings
Sys.getenv(c("LARES_GPT_MODEL", "LARES_GEMINI_API"))
#> LARES_GPT_MODEL
#> "gpt-3.5-turbo"
#> LARES_GEMINI_API
#> "https://generativelanguage.googleapis.com/v1beta/models/"
```

If you hit rate limits:

1. Add `Sys.sleep()` between calls
2. Use batch processing
3. Upgrade your API plan
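The first approach above can be sketched as a simple loop that pauses between requests; the prompt list and the two-second delay are illustrative placeholders, not library defaults:

```r
library(lares)

# Hypothetical batch of prompts to send one at a time
prompts <- c("Summarize Q1 results", "Summarize Q2 results")

answers <- lapply(prompts, function(p) {
  Sys.sleep(2) # pause between calls to stay under the rate limit
  gpt_ask(p)
})
```

Tune the sleep duration to your plan's requests-per-minute allowance; a delay slightly above `60 / limit` seconds keeps you safely under it.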
```r
library(lares)

# 1. Load a sample dataset (Titanic data shipped with lares)
data(dft)

# 2. Get an AI summary of the data structure
prompt <- sprintf(
  "Summarize this dataset structure: %d rows, columns: %s",
  nrow(dft),
  paste(colnames(dft), collapse = ", ")
)
data_summary <- gpt_ask(prompt)

# 3. Get stock data for the last 30 days
stocks <- stocks_hist("AAPL", from = Sys.Date() - 30)

# 4. Cache the analysis so repeated runs skip the API call
analysis <- cache_pipe(
  {
    gpt_ask(sprintf(
      "Analyze this stock: Recent high: $%.2f, Low: $%.2f",
      max(stocks$High, na.rm = TRUE),
      min(stocks$Low, na.rm = TRUE)
    ))
  },
  base = "aapl_analysis"
)
print(analysis)
```

For more details, see `?gpt_ask`, `?gemini_ask`, `?stocks_hist`, and `?get_credentials`.