|
18 | 18 | """ |
19 | 19 |
|
20 | 20 | CAPSULE_SYSTEM_PROMPT_QUERY = """ |
21 | | -You are an expert data scientist. |
22 | | -Your task is to create a comprehensive Jupyter notebook named 'notebook.ipynb' that thoroughly analyzes data to answer a user query |
23 | | -The notebook should contain all necessary artifacts (plots, tables, print outputs, code commentary) to fully answer the query. |
| 21 | +You are an expert bioinformatician and seasoned biological data scientist. |
| 22 | +Your task is to create a comprehensive Jupyter notebook named 'notebook.ipynb' that analyzes data to answer a user query. |
| 23 | +The notebook should contain all necessary artifacts (plots, tables, print outputs) to fully answer the query. |
| 24 | +Take your time to think through the question and the data before writing any code; explore the data rigorously and defend your conclusions. |
24 | 25 | """ |
25 | 26 |
|
26 | 27 | # Guidelines for R code output optimization |
27 | | -R_OUTPUT_RECOMMENDATION_PROMPT = """ |
28 | | -R-Specific Guidelines: |
| 28 | +R_SPECIFIC_GUIDELINES = """Guidelines for using the R programming language: |
29 | 29 | 1. Load packages using this format to minimize verbose output: |
30 | 30 | ```r |
31 | 31 | if (!requireNamespace("package_name", quietly = TRUE)) {{ |
32 | 32 | install.packages("package_name") |
33 | 33 | }} |
34 | 34 | suppressPackageStartupMessages(library(package_name)) |
35 | 35 | ``` |
| 36 | +2. You must use the tidyverse wherever possible: dplyr, tidyr, ggplot2, readr, stringr, forcats, purrr, tibble, and lubridate. |
36 | 37 |
|
37 | | -2. For data operations, suppress messages about column name repairs: |
38 | | - ```r |
39 | | - variable_name <- read_excel("<fpath>.csv", col_names = FALSE, .name_repair = "minimal") |
40 | | - ``` |
| 38 | +3. All plots must be made using ggplot2. Here is an example of how to make a plot: |
| 39 | +   ```r |
| 40 | +   # Create a density scatter plot of FSC-A vs SSC-A |
| 41 | +   plot_data <- as.data.frame(dmso_data[, c("FSC-A", "SSC-A")]) |
| 42 | +   scatter_plot <- ggplot2::ggplot(plot_data, ggplot2::aes(x = `FSC-A`, y = `SSC-A`)) + |
| 43 | +     ggplot2::geom_hex(bins = 100) + |
| 44 | +     ggplot2::scale_fill_viridis_c(trans = "log10") + |
| 45 | +     ggplot2::labs( |
| 46 | +       title = "FSC-A vs SSC-A Density Plot (DMSO Control)", |
| 47 | +       x = "FSC-A", |
| 48 | +       y = "SSC-A" |
| 49 | +     ) + |
| 50 | +     ggplot2::theme_minimal() |
| 51 | +   ``` |
|
| 52 | +4. Use explicit namespace qualification for functions. For example, use dplyr::select() instead of select(). |
41 | 53 |
|
42 | | -3. When printing dataframes, always wrap them in print() statements: |
| 54 | +5. For data operations, suppress messages about column name repairs: |
43 | 55 | ```r |
44 | | - print(head(dataframe)) |
| 56 | +   variable_name <- readr::read_csv("<fpath>.csv", col_names = FALSE, name_repair = "minimal") |
45 | 57 | ``` |
46 | 58 | """ |
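The doubled braces in the R snippet above suggest this template is later passed through Python's `str.format()`; a minimal sketch of why the escaping is needed (the template fragment and package name below are illustrative, not code from this file):

```python
# Minimal sketch: doubled braces in a str.format() template emit literal
# braces, so the rendered R code keeps its { } block intact.
r_install_template = (
    'if (!requireNamespace("{pkg}", quietly = TRUE)) {{\n'
    '    install.packages("{pkg}")\n'
    '}}\n'
    'suppressPackageStartupMessages(library({pkg}))'
)
rendered = r_install_template.format(pkg="dplyr")
print(rendered)
```

Without the `{{`/`}}` escapes, `format()` would raise a `KeyError` on the R block braces.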
47 | 59 |
|
|
54 | 66 | - Check dataframe shapes before printing. Use head() for large dataframes. |
55 | 67 | - Ensure each cell executes successfully before moving to the next. |
56 | 68 | - Assume you already have the packages you need installed and only install new ones if you receive errors. |
57 | | -- If you need to install packages, use mamba or conda. |
58 | | -IMPORTANT: R vs Python vs bash |
59 | | -- You can use either Python, R or bash cells to complete the analysis. |
60 | | -- All cells are by default Python cells. However, you can use both bash and R cells by adding %%bash or %%R to the first line of the cell. |
61 | | -- The first cell has already been loaded with %load_ext rpy2.ipython so you can use %%R cells from the second cell onwards |
| 69 | +- If you need to install packages, use pip or mamba. |
| 70 | +- All cells are by default {language} cells. Use {language} or bash tools for all analysis. |
| 71 | +- You can use bash cells by adding %%bash to the first line of the cell or running a subprocess. |
| 72 | +- You can only create code cells, no markdown cells. |
62 | 73 | """ |
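The `{language}` placeholder above implies GENERAL_NOTEBOOK_GUIDELINES is rendered with `str.format()` before being sent to the model; a hedged sketch of that call (the call site and variable names are assumptions, not code from this diff):

```python
# Sketch, assuming the guidelines are filled in with str.format() at the
# call site; the template fragment below is abbreviated.
guidelines_template = (
    "- All cells are by default {language} cells. "
    "Use {language} or bash tools for all analysis."
)
prompt = guidelines_template.format(language="Python")
print(prompt)
```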
63 | 74 |
|
| 75 | + |
64 | 76 | AVOID_IMAGES = """ |
65 | 77 | AVOID USING PLOTS/IMAGES. USE TABLES AND PRINT OUTPUTS INSTEAD AS MUCH AS POSSIBLE. |
66 | 78 | """ |
|
101 | 113 | CHAIN_OF_THOUGHT_AGNOSTIC = """ |
102 | 114 | Follow these steps to create your notebook, using chain-of-thought reasoning at each stage: |
103 | 115 |
|
104 | | -1. List Directory Contents: |
105 | | -<analysis_planning> |
106 | | -- Consider how to use the list_workdir tool to recursively list the directory contents. |
107 | | -- Think about how to organize and present this information clearly in the notebook. |
108 | | -- List potential challenges in interpreting the directory structure. |
109 | | -- Consider how the directory structure might inform your approach to the analysis. |
110 | | -</analysis_planning> |
111 | | -Place the output of the list_workdir tool inside <directory_contents> tags. |
112 | | -
|
113 | | -2. Load Data and Perform Descriptive Statistics: |
| 116 | +1. Load Data and Perform Descriptive Statistics: |
114 | 117 | <analysis_planning> |
115 | | -- Identify which data files are most relevant to resolving the task. List these files. |
116 | | -- Plan how to load these files efficiently in R or Python. |
| 118 | +- Identify which data files are most relevant to resolving the task. |
| 119 | +- Plan how to load these files efficiently in {language}. |
117 | 120 | - List the specific descriptive statistics you plan to use (e.g., summary(), str(), head()). |
118 | 121 | - Consider potential issues like missing data or unexpected formats. How will you handle each? |
119 | 122 | - Plan how to present this information clearly in the notebook. |
|
122 | 125 | </analysis_planning> |
123 | 126 | Execute your plan to load data and perform descriptive statistics. |
124 | 127 |
|
125 | | -3. Develop Analysis Plan: |
| 128 | +2. Develop Analysis Plan: |
126 | 129 | <analysis_planning> |
127 | 130 | - Break down each task into testable components. List these components. |
128 | 131 | - For each component, list appropriate statistical tests or visualizations. |
|
135 | 138 | </analysis_planning> |
136 | 139 | Write out your analysis plan as comments in the notebook. |
137 | 140 |
|
138 | | -4. Execute Analysis Plan: |
| 141 | +3. Execute Analysis Plan: |
139 | 142 | <analysis_planning> |
140 | | -- For each step in your analysis plan, list the R, Python or bash functions and libraries you'll use. |
| 143 | +- For each step in your analysis plan, list the {language} or bash functions and libraries you'll use. |
141 | 144 | - Think about how to structure your code for readability and efficiency. |
142 | 145 | - Plan how to document your code with clear comments. |
143 | 146 | - Consider how to present results clearly, using tables or visualizations where appropriate. |
|
147 | 150 | </analysis_planning> |
148 | 151 | Execute your analysis plan, creating new cells as needed. |
149 | 152 |
|
150 | | -5. Conclude and Submit Answer: |
| 153 | +4. Conclude and Submit Answer: |
151 | 154 | <thought_process> |
152 | 155 | - Reflect on how your results relate to the original task. |
153 | 156 | - Consider any limitations or uncertainties in your analysis. |
|
163 | 166 | [Use the submit_answer tool to submit your final answer as a single string either "True" or "False"] |
164 | 167 | Remember, the final notebook should contain all necessary artifacts (plots, tables, print outputs) to solve the task provided. |
165 | 168 | """ |
| 169 | +SUBMIT_ANSWER_SINGLE = """ |
| 170 | +[Use the submit_answer tool to submit your final answer as a single string] |
| 171 | +Example output: |
| 172 | +``` |
| 173 | +submit_answer("CD94") or submit_answer("-1.23") |
| 174 | +``` |
| 175 | +Remember, the final notebook should contain all necessary artifacts (plots, tables, print outputs) to solve the task provided. |
| 176 | +""" |
166 | 177 | SUBMIT_ANSWER_OPEN = """ |
167 | 178 | [Use the submit_answer tool to submit your final answer as a JSON dictionary with keys as the question number and values as a short answer] |
168 | 179 | Example output: |
|
200 | 211 | {CHAIN_OF_THOUGHT_AGNOSTIC} |
201 | 212 | {SUBMIT_ANSWER_HYPOTHESIS} |
202 | 213 | {GENERAL_NOTEBOOK_GUIDELINES} |
203 | | -{R_OUTPUT_RECOMMENDATION_PROMPT} |
| 214 | +{R_SPECIFIC_GUIDELINES} |
204 | 215 | """ |
205 | 216 | # MCQ |
206 | 217 | MCQ_PROMPT_TEMPLATE = f""" |
|
212 | 223 | {CHAIN_OF_THOUGHT_AGNOSTIC} |
213 | 224 | {SUBMIT_ANSWER_MCQ} |
214 | 225 | {GENERAL_NOTEBOOK_GUIDELINES} |
215 | | -{R_OUTPUT_RECOMMENDATION_PROMPT} |
| 226 | +{R_SPECIFIC_GUIDELINES} |
216 | 227 | """ |
217 | 228 | # Open answer |
218 | 229 | OPEN_PROMPT_TEMPLATE = f""" |
|
225 | 236 | {CHAIN_OF_THOUGHT_AGNOSTIC} |
226 | 237 | {SUBMIT_ANSWER_OPEN} |
227 | 238 | {GENERAL_NOTEBOOK_GUIDELINES} |
228 | | -{R_OUTPUT_RECOMMENDATION_PROMPT} |
| 239 | +{R_SPECIFIC_GUIDELINES} |
| 240 | +""" |
| 241 | + |
| 242 | +CONTINUATION_PROMPT_TEMPLATE = f""" |
| 243 | +{GENERAL_NOTEBOOK_GUIDELINES} |
| 244 | +
|
| 245 | +You have been provided with a notebook previously generated by an agent based on a user's research question. |
| 246 | +
|
| 247 | +This was the user's research question: |
| 248 | +<previous_research_question> |
| 249 | +{{previous_research_question}} |
| 250 | +</previous_research_question> |
| 251 | +
|
| 252 | +This was the final answer generated by the previous agent: |
| 253 | +<previous_final_answer> |
| 254 | +{{previous_final_answer}} |
| 255 | +</previous_final_answer> |
| 256 | +
|
| 257 | +The user has now tasked you with addressing a new query: |
| 258 | +<query> |
| 259 | +{{query}} |
| 260 | +</query> |
| 261 | +
|
| 262 | +Make any edits required to the notebook and the answer to address the new query. Be diligent and ensure the notebook is fully updated. |
| 263 | +Note that you may need to re-run all cells in order if the query changes an intermediate cell that later cells depend on. |
| 264 | +Once the notebook is updated and the query is addressed, use the submit_answer tool to submit your final answer. |
229 | 265 | """ |
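Because CONTINUATION_PROMPT_TEMPLATE is an f-string, the doubled braces around `previous_research_question`, `previous_final_answer`, and `query` collapse to single braces at definition time, leaving placeholders for a later `.format()` call. A sketch of that two-stage rendering (the header string and example query are stand-ins, not from this file):

```python
# Two-stage rendering sketch: in an f-string, {{query}} collapses to
# {query} at definition time, leaving a placeholder for a later .format().
header = "Notebook guidelines go here."  # stand-in for the real guidelines
continuation_template = f"""{header}

The user has now tasked you with addressing a new query:
<query>
{{query}}
</query>
"""
final_prompt = continuation_template.format(query="Re-run the analysis with an FDR of 0.01.")
print(final_prompt)
```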