update
@@ -203,6 +203,7 @@ function planner_mistral_openorca(a::agentReflex)
Plan: first you should always think about your conversation with the user and your earlier work thoroughly, then extract and devise a complete, step-by-step plan to achieve your objective (pay attention to correct numeral calculation and common sense).
P.S.1 Each step of the plan should be a single action.
P.S.2 Ask the user if you don't have the info.
P.S.3 Mark a completed step with the done keyword.
<|/s|>
$conversation
<|assistant|>
@@ -557,6 +558,24 @@ function conversation(a::agentReflex, usermsg::String; attemptlimit::Int=3)
    return response
end

"""
Direct conversation is not an agent: messages do not pass through the logic loop
but go directly to the LLM.
"""
function directconversation(a::agentReflex, usermsg::String)
    response = nothing

    _ = addNewMessage(a, "user", usermsg)
    if isusetools # use tools before responding
        workstate, response = work(a)
    end

    response = chat_mistral_openorca(a)
    response = removeTrailingCharacters(response)
    _ = addNewMessage(a, "assistant", response)
    return response
end

"""
Continuously run the LLM functions, except when the LLM is producing an Answer: or chatbox output.

There are many work() variants, depending on the thinking mode.
"""
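The docstring above says there are many work() implementations selected by the thinking mode. One natural way to express that in Julia is multiple dispatch on a mode type. The sketch below is illustrative only: the type names (`ThinkingMode`, `ReActMode`, `PlanMode`) and this two-argument `work()` signature are assumptions for demonstration and do not appear in the diff.

```julia
# Hypothetical sketch of mode-dependent work() via multiple dispatch.
# None of these names come from the commit; they only illustrate the pattern.
abstract type ThinkingMode end
struct ReActMode <: ThinkingMode end
struct PlanMode  <: ThinkingMode end

# Each mode gets its own method; Julia picks the right one at the call site.
work(::ReActMode, messages::Vector{String}) = "react: " * last(messages)
work(::PlanMode,  messages::Vector{String}) = "plan: "  * join(messages, " -> ")

# Usage:
work(ReActMode(), ["hi"])       # dispatches to the ReAct method
work(PlanMode(), ["a", "b"])    # dispatches to the planning method
```

Compared with a single `work(a)` that branches internally, dispatch keeps each thinking mode's logic in its own method and makes adding a new mode a matter of defining one more type and method.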