
Conversation


@cpsievert cpsievert commented Dec 10, 2025

Closes #128

Some of the Python changes here are a follow-up to #119

@cpsievert cpsievert requested a review from Copilot December 10, 2025 17:22


@cpsievert cpsievert marked this pull request as ready for review December 10, 2025 17:50
@cpsievert cpsievert requested a review from gadenbuie December 10, 2025 17:50
Co-authored-by: Garrick Aden-Buie <[email protected]>

@gadenbuie gadenbuie left a comment


Good stuff! It's definitely a big step forward for the docs. I've been submitting feedback as I worked through things. I made it through the stuff that shows up in the Python diffs; I'll pick up with the R things in a bit, although I suspect there's some overlap and that some of the Python comments will directly translate to the R docs.

This commit addresses all 30 review comments from PR #162, implementing
comprehensive improvements to both R and Python documentation for consistency,
clarity, and better user experience.

## Capitalization Standardization

- Standardized use of "querychat" (lowercase) when referring to the package/product
  in prose throughout all documentation
- Maintained "QueryChat" (camel case) for Python class names in code examples
- Maintained "QueryChat" (camel case) when referring to class/instances in narrative
- Fixed overcorrections to ensure Python class name remains properly capitalized
- Files affected:
  - R: vignettes/tools.Rmd, vignettes/context.Rmd, README.md
  - Python: index.qmd, context.qmd, build.qmd, tools.qmd, models.qmd,
    greet.qmd, data-sources.qmd, _examples/*.py

## Grammar and Language Fixes

- Fixed "up vote" → "upvote" in both Python and R build documentation
- Removed unnecessary words: "In this case", "(safely)"
- Clarified LLM vs querychat roles: "The LLM generates SQL, querychat executes it"
- Improved sentence structure and flow throughout

## Content Improvements

### Introduction/README Changes (R & Python)
- Changed "For analysts" → "For users" (more inclusive)
- Rewrote developer section in second person for directness
- Made benefits more specific and less generic

### Python index.qmd Enhancements
- Fixed "VSCode" → "VS Code" (official branding)
- Mentioned Positron first, then VS Code
- Clarified that saving to file is optional (can run in console)
- Added QUERYCHAT_CLIENT environment variable example
- Simplified code example by removing explicit client parameter

### Context Documentation Restructuring (R & Python)
- Reorganized intro to be more linear:
  1. What querychat automatically gathers
  2. LLMs don't see actual data
  3. Three ways to customize system prompt
- Moved system prompt definition to footnote (Python) or parenthetical (R)
- Made it clearer that customization is optional enhancement

## Structural Improvements

### Python build.qmd Quarto Enhancements
- Extracted inline app code to separate, runnable files:
  - pkg-py/docs/_examples/titanic-dashboard.py
  - pkg-py/docs/_examples/multiple-datasets.py
- Replaced HTML `<details>`/`<summary>` with Quarto's code-fold feature
- Used Quarto include syntax for cleaner documentation
- Apps can now be run and tested independently

### Site Tagline
- Reverted docs/index.html tagline to original "Chat with your data in any language"
- Original is more inviting and covers both R/Python + multilingual LLM support
- Fixed capitalization in description text

## Files Changed

Modified (10):
- docs/index.html
- pkg-py/docs/build.qmd
- pkg-py/docs/context.qmd
- pkg-py/docs/data-sources.qmd
- pkg-py/docs/index.qmd
- pkg-py/docs/tools.qmd
- pkg-r/README.md
- pkg-r/vignettes/build.Rmd
- pkg-r/vignettes/context.Rmd
- pkg-r/vignettes/tools.Rmd

Added (2):
- pkg-py/docs/_examples/multiple-datasets.py
- pkg-py/docs/_examples/titanic-dashboard.py

Statistics: 12 files changed, 184 insertions(+), 179 deletions(-)

All changes maintain consistency between R and Python documentation while
respecting their different documentation systems (R Markdown vs Quarto).

While `querychat_app()` provides a quick way to start exploring data, building bespoke Shiny apps with querychat unlocks the full power of integrating natural language data exploration with custom visualizations, layouts, and interactivity. This guide shows you how to integrate querychat into your own Shiny applications and leverage its reactive data outputs to create rich, interactive dashboards.

I think it's a good idea to start with the simple template, but this article in general assumes you're starting from scratch to build a Shiny app that wraps querychat.

I think it'd be useful to talk about what kinds of apps make good querychat apps up front, which would also help people who have an existing app they want to bring querychat into.

This was the approach that I took in the Programming with LLMs workshop, some of my slides might help: https://posit-conf-2025.github.io/llm/slides/slides-10.html#/querychat

The general idea is to acknowledge that the best use case for a querychat-powered Shiny app is probably an app with a single data source plus a bunch of filters that combine to create a reactive data frame used in a lot of different places. (That's the idea with the first two diagrams, at least.)


querychat automatically gathers information about your table to help the LLM write accurate SQL queries. This includes column names and types, numerical ranges, and categorical value examples. (All of this information is provided to the LLM as part of the **system prompt** -- a string of text containing instructions and context for the LLM to consider when responding to user queries.)

Importantly, the LLM never sees the actual data itself -- it doesn't need to in order to write SQL queries for you. It only needs to understand the structure and schema of your data.
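For intuition, here is a rough sketch in plain Python (not querychat's actual implementation) of the kind of schema summary that can stand in for raw data: column names, numeric ranges, and a few categorical examples are enough for an LLM to write correct SQL.

```python
# Illustrative only: build a compact schema description from row dicts,
# the sort of structural info an LLM needs instead of the data itself.
def summarize_columns(rows):
    summary = {}
    for col in rows[0]:
        values = [r[col] for r in rows if r[col] is not None]
        if all(isinstance(v, (int, float)) for v in values):
            summary[col] = f"numeric, range {min(values)}-{max(values)}"
        else:
            examples = sorted(set(values))[:3]
            summary[col] = f"categorical, e.g. {', '.join(map(str, examples))}"
    return summary

rows = [
    {"species": "Adelie", "bill_len": 39.1},
    {"species": "Gentoo", "bill_len": 47.5},
    {"species": "Adelie", "bill_len": 38.6},
]
print(summarize_columns(rows))
```

A summary like this goes into the system prompt; the rows themselves never do.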

Let's flip this and say something like "we are not sending your raw data to the LLM and asking it to do complicated math".

There's nuance in the "never sees the actual data itself" that I think we're better off avoiding and we can explain the fundamentals without having to contradict ourselves.


Could also be good to point to an article here, something about how LLMs are bad at math, like Wes' recent post.


You can also connect `querychat` directly to any database supported by [DBI](https://dbi.r-dbi.org/). This includes popular databases like SQLite, DuckDB, PostgreSQL, MySQL, and many more.

Assuming you have a database set up and accessible, you can create a DBI connection and pass it to `QueryChat$new()`. Below are some examples for common databases.
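The same idea applies on the Python side. As a minimal stdlib-only illustration with SQLite (this sketch only sets up the connection and table; how the connection is handed to querychat is covered in the package docs):

```python
import sqlite3

# Minimal SQLite setup using only the stdlib. As with the R DBI examples,
# querychat works against a live connection: the LLM writes SQL, and the
# query runs inside the database rather than on an in-memory copy.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE penguins (species TEXT, bill_len REAL)")
con.executemany(
    "INSERT INTO penguins VALUES (?, ?)",
    [("Adelie", 39.1), ("Gentoo", 47.5), ("Chinstrap", 48.7)],
)
con.commit()

# An LLM-generated query is executed like any other SQL statement:
row_count = con.execute("SELECT COUNT(*) FROM penguins").fetchone()[0]
```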

The text says `QueryChat$new()` but then the examples use `querychat_app()`. That makes sense, but it should be called out to avoid confusion. Or we could switch to `querychat()` in all the places.

btw, I kind of prefer we talk about querychat() on the R side. Not strongly enough to say we should rewrite all of this to do that, but I think it's likely the more convenient interface for R users

Comment on lines +136 to +140
# Or from CSV
dbExecute(con, "
CREATE TABLE my_table AS
SELECT * FROM read_csv_auto('path/to/your/file.csv')
")

I'm pretty certain the duckdb R package has a friendlier function that directly does this
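For comparison, a stdlib-only Python analog of the DuckDB `read_csv_auto()` snippet above (DuckDB also infers column types automatically, which this sketch does not; the inline CSV is a stand-in for a real file path):

```python
import csv
import io
import sqlite3

# Load CSV rows into a database table with only the stdlib.
# The inline string stands in for 'path/to/your/file.csv'.
csv_text = "species,bill_len\nAdelie,39.1\nGentoo,47.5\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_table (species TEXT, bill_len REAL)")
con.executemany("INSERT INTO my_table VALUES (:species, :bill_len)", rows)
n_rows = con.execute("SELECT COUNT(*) FROM my_table").fetchone()[0]
```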


## Provide a greeting

When the querychat UI first appears, you will usually want it to greet the user with some basic instructions. By default, these instructions are auto-generated every time a user arrives. In a production setting with multiple users/visitors, this is slow, wasteful, and non-deterministic. Instead, you should create a greeting file and pass it when creating your `QueryChat` object:
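The pattern behind a greeting file is "generate once, reuse for every visitor". A hedged sketch in plain Python (`generate_greeting()` is a hypothetical stand-in for the one-time LLM call, and the file path is illustrative; see the querychat docs for the actual parameter that accepts the greeting):

```python
import tempfile
from pathlib import Path

# Generate the greeting once, then reuse it on every visit instead of
# making a fresh LLM call per user. A fresh temp dir keeps this runnable.
greeting_path = Path(tempfile.mkdtemp()) / "greeting.md"

def generate_greeting() -> str:
    # Hypothetical stand-in for a one-time LLM call that writes the greeting.
    return "Hi! Ask me anything about the penguins dataset."

if not greeting_path.exists():
    greeting_path.write_text(generate_greeting())

greeting = greeting_path.read_text()
```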

this is slow, wasteful, and non-deterministic.

This still sounds a little aggressive to me. It's fine but it could be better. Or maybe it's what you want?

When the querychat UI first appears, you will usually want it to greet the user with some basic instructions. By default, these instructions are auto-generated every time a user arrives. In a production setting with multiple users/visitors, this is slow, wasteful, and non-deterministic. Instead, you should create a greeting file and pass it when creating your `QueryChat` object:

```{r}
querychat_app(
```

I think it'd be better to use querychat() here because we're talking about "in production". You could add a qc$app() line below, maybe in a comment, as a way to quickly test


## Specify a model

To use a particular model, pass a `"{provider}/{model}"` string to the `client` parameter. Under the hood, this gets passed along to `ellmer::chat()`:

"under the hood" used twice pretty close to each other

To use a particular model, pass a `"{provider}/{model}"` string to the `client` parameter. Under the hood, this gets passed along to `ellmer::chat()`:

```{r}
querychat_app(penguins, client = "anthropic/claude-sonnet-4-5")
```
@gadenbuie gadenbuie Dec 17, 2025


Again, querychat_app() is convenient for users to test, but I think querychat() is a better balance for most use cases because it can be both simple and fit into the other more complicated examples.

In other words, querychat() composes with other tasks in the vignettes but querychat_app() is a programmatic dead-end.
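As a side note on the `client` string format discussed above: the `"{provider}/{model}"` convention splits on the first slash. A tiny illustration of the convention only, not of ellmer's internals:

```python
# "{provider}/{model}" splits on the first "/" only; model names may
# themselves contain dashes or dots, so split with maxsplit=1.
client_spec = "anthropic/claude-sonnet-4-5"
provider, model = client_spec.split("/", 1)
print(provider, model)  # anthropic claude-sonnet-4-5
```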

- Claude Sonnet 4.5
- Google Gemini 3.0

In our testing, we've found that those models strike a good balance between accuracy and latency. Smaller/cheaper models like GPT-4o-mini are fine for simple queries but make surprising mistakes with more complex ones; and reasoning models like o3-mini slow down responses without providing meaningfully better results.

Interesting, I would have said all the "fast" models are pretty good, at least for most tables, i.e. I haven't had bad experiences with gpt-4.1-mini (even gpt-4.1-nano is okay) or claude-haiku-4-5.

I guess I'd recommend encouraging people to try out the smaller faster models first and to switch if they don't work well for the data set. (Personally, I'd be turned off from even trying the smaller models by the language "make surprising mistakes with more complex ones".)


In our testing, we've found that those models strike a good balance between accuracy and latency. Smaller/cheaper models like GPT-4o-mini are fine for simple queries but make surprising mistakes with more complex ones; and reasoning models like o3-mini slow down responses without providing meaningfully better results.

We've also seen some decent results with frontier local models, but even if you have the compute to run the largest models, they still tend to lag behind the cloud-hosted options in terms of accuracy and speed.

Could call out gpt-oss:20b maybe?


![](../reference/figures/quickstart-summary.png){alt="Screenshot of the querychat app with a summary statistic inlined in the chat." class="shadow rounded"}

## View the source

Might need a new section for advanced use cases now that you can also use $client() to create a client with these tools with custom callbacks outside of a Shiny context


@gadenbuie gadenbuie left a comment


Okay, I've read through the R vignettes. Not carefully enough to have found every typo, but well enough to give general feedback. It's a huge improvement and I really like how you've organized the topics!

Co-authored-by: Garrick Aden-Buie <[email protected]>

Successfully merging this pull request may close these issues.

(R) Update website
