Chaining prompts (Integrating an LLM response into a second, constant prompt)

Hi!
Is there a way to chain prompts and responses in a single request?

I have a question plus a constant follow-up question that I want to ask my model, but I don't want to ask both questions in a single prompt (chain of thought), because the CONSTANT second question changes the response to the first question by introducing additional, undesired context.

Is there a way to chain the prompts and the answers in a single API call? I would like to avoid sending separate requests to the server (owing to latency).

I am using Gemini 1.0 Pro via the Vertex AI API, and I am interested only in the follow-up answer.
This is the scheme: 

Question_1 --> Answer_1 
Question_2 + Answer_1 --> Answer_2
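
For context, here is roughly what that two-request flow looks like today with the Vertex AI Python SDK (a minimal sketch; the project ID, location, and question strings are placeholders):

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Placeholder project and location -- substitute your own.
vertexai.init(project="my-project", location="us-central1")
model = GenerativeModel("gemini-1.0-pro")

question_1 = "..."  # the variable first question
question_2 = "..."  # the CONSTANT follow-up question

# Request 1: Question_1 --> Answer_1
answer_1 = model.generate_content(question_1).text

# Request 2: Question_2 + Answer_1 --> Answer_2
# (this second round trip is the latency I want to avoid)
answer_2 = model.generate_content(f"{question_2}\n\n{answer_1}").text
print(answer_2)
```

Ideally both steps would happen server-side in one call, so only Answer_2 comes back to me.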

I would like to send just the two questions. Is there a way to do this in a single HTTP request?
