I am trying to create a Custom GPT in the ChatGPT Enterprise version for my company, in which I am trying to implement this feature:
The user requests an analysis of documents stored in the backend.
An API call is made to my backend Python program hosted on a private Azure instance.
The fetched data is used in the analysis. *I have added custom instructions on how to perform the analysis.
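One way to keep each Action call within the token limit is to have the backend itself paginate the documents, so the GPT only ever receives one batch per call. A minimal sketch of that idea, where the function name `fetch_contract_batch` and the in-memory `CONTRACTS` list are illustrative assumptions, not part of my actual setup:

```python
# Hypothetical sketch: the backend exposes offset/limit pagination so each
# Custom GPT Action call returns only one token-sized batch of documents.
CONTRACTS = [f"vendor_{i}_contract" for i in range(10)]  # placeholder data

def fetch_contract_batch(offset: int, limit: int) -> dict:
    """Return one batch plus a next_offset the GPT can use to call again."""
    batch = CONTRACTS[offset:offset + limit]
    has_more = offset + limit < len(CONTRACTS)
    return {
        "documents": batch,
        # None signals the GPT that all batches have been retrieved
        "next_offset": offset + limit if has_more else None,
    }
```

The instruction can then tell the GPT to keep calling the Action with the returned `next_offset` until it is `None`, rather than asking for everything at once.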
The issue is the context window of the Custom GPT. I want to write an instruction so that API calls to the backend are made with a slight delay, and the information retrieved from the documents is processed in batches. Here is how I am trying to do it via instructions:
- Complete Document Analysis across multiple Vendors:
- Retrieve vendor contracts in batches to stay within the token limit.
- Process each batch, analyze it according to the Analysis Type, store intermediate results in memory, and then remove the processed data.
- Introduce a delay of three seconds between each batch processing to manage token usage and ensure thorough analysis.
- Once all batches are processed, compile and provide a comprehensive summary of the analysis, including all alignments, discrepancies, and recommendations in a single combined table.
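Instructions alone may not reliably make the model pause or discard data, so enforcing the pacing and intermediate-result handling on the backend side is more dependable. The steps above could be sketched roughly like this, where `analyze` is a hypothetical stand-in for the per-batch analysis step:

```python
import time

def analyze_batches(batches, analyze, delay_s: float = 3.0) -> list:
    """Process batches sequentially, keeping only intermediate summaries.

    Only the result of `analyze(batch)` is retained; the raw batch is
    released once processed, mirroring the "store intermediate results,
    then remove the processed data" step in the instructions.
    """
    intermediate = []
    for i, batch in enumerate(batches):
        intermediate.append(analyze(batch))  # store the summary, not raw docs
        if i < len(batches) - 1:
            time.sleep(delay_s)              # pacing between batch calls
    return intermediate                      # compiled into the final summary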
Has anyone done something like this? Please don't suggest the Assistants API, as it requires setting up frontend/backend logic and is not cheap…
Questions:
- How do I implement this delay in the API schema?
- Does a Custom GPT clear its processing data based on instructions?
Would appreciate meaningful input!