Code fence tool #165 (Open)
KeremTurgutlu wants to merge 1 commit into main
Conversation
KeremTurgutlu (Contributor, Author) commented:
@jph00 In your original implementation, the code fence tool is auto-activated if lang tools are present in the schema, which is also the case here. Alternatively, we could make the system generic by adding callbacks, though I'm not sure whether there are other use cases:

```python
class FenceToolStop(Stop):
    def __init__(self, langs, ns=None): self.langs,self.ns = langs,ns  # ns: exec namespace used below

    def stop_and_trim(self, text):
        "Stop on truthy condition; optionally return the match to trim by"
        m = _fence_re.search(text)
        if m and m.group(1) in self.langs: return m.group(0)

    def after_msgs(self, msgs):
        "Optionally postprocess messages in Chat._prep_msg before the API call"
        return _split_fence_msgs(msgs)

    def after_toolcall(self, msgs, stream=False):
        "Optionally postprocess messages in Chat._call during the tool-calling step, before the API call"
        m = msgs[-1]
        if m.role == 'assistant':
            if fence := extract_fence_call(m.content or ''):
                lang, code = fence
                out = run_fence_tool(lang, code, self.ns)
                m.content += out
                if stream: yield mk_stream_chunk(content=out, role='assistant')  # note: yield makes this a generator; sketch only
        return msgs
```

This can be a follow-up PR if you think it's a good direction; I couldn't decide.
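For context, the stop-and-trim mechanics can be demonstrated standalone. This is a simplified sketch with an assumed fence regex; the PR's actual `_fence_re` and `Stop` base class may differ:

```python
import re

# Assumed fence pattern -- the PR's actual _fence_re may differ.
_fence_re = re.compile(r"`{3}(\w+)\n.*?`{3}", re.S)

class FenceToolStop:
    "Standalone sketch: stop generation once a fence for an active lang closes."
    def __init__(self, langs): self.langs = set(langs)

    def stop_and_trim(self, text):
        "A truthy return means stop; the returned match is what to trim the text by."
        m = _fence_re.search(text)
        if m and m.group(1) in self.langs: return m.group(0)

fence = "`" * 3
stop = FenceToolStop(['python', 'bash'])
txt = f"Sure:\n{fence}python\nprint('hi')\n{fence}\nextra text we don't want"
match = stop.stop_and_trim(txt)
# Trim everything generated after the closing fence.
trimmed = txt[:txt.index(match) + len(match)] if match else txt
```

Here `trimmed` ends at the closing fence, so the trailing `extra text we don't want` never reaches the history.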
This PR adds the code fence tool feature, which lets the AI write executable markdown code (python and bash) to bypass tool schema parsing. The main motivation is to make tool calling as simple as code generation for the LLM.
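To illustrate the idea, a fenced "tool call" can be extracted and executed with an ordinary regex plus `exec`/`subprocess`. This is a hedged sketch: `FENCE_RE` and `run_fence` are illustrative names, not the PR's actual helpers.

```python
import re, io, contextlib, subprocess

# Hypothetical names for illustration -- not the helpers used in the PR.
FENCE_RE = re.compile(r"`{3}(python|bash)\n(.*?)`{3}", re.S)

def run_fence(text):
    "Execute the first python/bash markdown fence found in `text`, returning its stdout."
    m = FENCE_RE.search(text)
    if not m: return None
    lang, code = m.groups()
    if lang == 'python':
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf): exec(code, {})
        return buf.getvalue()
    return subprocess.run(['bash', '-c', code], capture_output=True, text=True).stdout

fence = "`" * 3
reply = f"Let me check:\n{fence}python\nprint(2 + 2)\n{fence}\n"
out = run_fence(reply)  # -> "4\n"
```

The model only has to produce a normal markdown fence, so no JSON tool schema parsing is involved on the generation side.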
Initial details can be found in this spec doc; here are some implementation details, which may or may not differ:
- Tested `Chat`/`AsyncChat` across all the test models, and additionally tested mixed tool calling and code fence tool calling: the AI can write code fence tools and then call a regular tool in the same tool loop, and vice versa.
- Rather than relying on the `stop` tokens argument, stopping during code fence tool exec is handled via a `stop_callables` mechanism. During streaming we check the text collected from the chunks so far; if the stop condition is matched, the remaining chunks are yielded as reasoning deltas. All the original chunks are still consumed (but not streamed) so that the final `ModelResponse` object can be built without failures and with the correct usage metadata. Only the final text in `ModelResponse` is updated, trimmed up to the stop condition.
- `_lang2tool` is hardcoded and not user-configurable. It is used to turn on the code fence tool feature by looking for `python` and `bash` in the tool schema.
- `_split_msg_on_fences` is used in `fmt2hist` -> `mk_msgs` in the presence of a code fence tool result. This means `stop_callables` is not generic and is expected to be used with `FenceToolStop`. The same goes for the `_active_fence_langs` check, which automatically adds `FenceToolStop` to the `stop_callables` kwargs.
- `python` and `bash` tools in `ns` need to have the following params to work:

Misc

- `display_stream` with `Chat` examples.
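The streaming stop behavior described above can be sketched as follows. This is a simplified mock: real chunks are provider stream events, and the stop condition here is a placeholder rather than a fence regex.

```python
def stream_with_stop(chunks, stop_and_trim):
    """Mock of the stop_callables streaming flow: yield content deltas until the
    stop condition matches the accumulated text, then surface remaining chunks
    as reasoning deltas. All chunks are still consumed, so the final response
    (and its usage metadata) can be built from the full text."""
    text, deltas, match = '', [], None
    for chunk in chunks:
        text += chunk  # every chunk is consumed, even after the stop hit
        if match is None:
            deltas.append(('content', chunk))
            match = stop_and_trim(text)
        else:
            deltas.append(('reasoning', chunk))
    # Only the final text is trimmed, up to the end of the stop match.
    trimmed = text[:text.index(match) + len(match)] if match else text
    return deltas, trimmed

# Placeholder stop condition: stop once 'END' appears in the collected text.
stop_fn = lambda t: 'END' if 'END' in t else None
deltas, trimmed = stream_with_stop(['Hello ', 'world END', ' more', ' text'], stop_fn)
# trimmed == 'Hello world END'; the last two chunks became reasoning deltas
```

The full (untrimmed) text stays available for building the final response object, mirroring the behavior described in the bullet above.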