I also just ran into this issue after cloning from master a few hours ago; message_agent went over the limit once, after which subsequent calls also failed. Telling the system to delete and re-create the agent got it past the bottleneck. Maybe some way to restrict the history provided to sub-agents would work?
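As a purely illustrative sketch of what "restricting the history provided to sub-agents" could look like (the function name and message format here are assumptions, not the actual AutoGPT code):

```python
# Hypothetical illustration: cap how much conversation history is forwarded
# to a sub-agent so message_agent stays under the model's token limit.
def trim_history(messages: list[dict], max_messages: int = 10) -> list[dict]:
    """Keep only the most recent messages before handing them to a sub-agent."""
    return messages[-max_messages:]

history = [{"role": "user", "content": f"step {i}"} for i in range(50)]
print(len(trim_history(history)))  # 10
```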
26 Matching Annotations
- Nov 2025
-
github.com
-
-
Basically the max is 8192 tokens in this context; lowering that will force it to split anything larger into smaller chunks, e.g.: def split_text(text: str, max_length: int = 4192) -> Generator[str, None, None]: would split anything above that, I believe. It's linked into messages and other functions.
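For reference, a minimal sketch of the kind of chunking described here (assumed behaviour only, not the exact AutoGPT implementation, which works on tokens rather than raw characters):

```python
from typing import Generator

def split_text(text: str, max_length: int = 4192) -> Generator[str, None, None]:
    """Yield pieces of `text`, each at most `max_length` units long."""
    for start in range(0, len(text), max_length):
        yield text[start:start + max_length]

# Anything longer than max_length comes back as multiple chunks.
chunks = list(split_text("x" * 10000))
print(len(chunks))  # 3
```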
-
- Oct 2025
-
github.com
-
The solution where I installed tf_keras worked for that section, but I’m encountering a similar error in the "Attach a classification head" section of the same notebook. However, the previous solution does not seem to work in this case.
-
-
github.com
-
Could you try updating MLflow to 2.20 or newer? The dictionary parameter type is supported since 2.20.
check version
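A quick way to check which version is installed before trying the upgrade (standard MLflow attribute, nothing project-specific):

```python
import mlflow

print(mlflow.__version__)  # needs to be 2.20 or newer for dict-typed params
```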
-
I tried to apply the code you suggested but it's not working well:
-
You can pass different thread ID (params) at runtime. The one passed to log_model is just an input "example" for MLflow to determine the input signature (type) for the model.
Discussion
-
You can use params to pass the configurable object including thread ID.
suggested a new parameter
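A minimal sketch of what that could look like (assuming MLflow 2.20+ as suggested above; the EchoModel wrapper and the "configurable" key are illustration-only assumptions, not the actual model): the params passed at logging time are only an example used to infer the signature, and different values can be supplied at predict time.

```python
import mlflow
from mlflow.models import infer_signature


class EchoModel(mlflow.pyfunc.PythonModel):
    """Stand-in model: a real agent/graph would read the thread ID from params."""

    def predict(self, context, model_input, params=None):
        configurable = (params or {}).get("configurable", {})
        return {"thread_id": configurable.get("thread_id"), "echo": model_input}


input_example = {"messages": [{"role": "user", "content": "hi"}]}
# Only an *example* used to infer the params signature; real values come later.
params_example = {"configurable": {"thread_id": "example-thread"}}

signature = infer_signature(model_input=input_example, params=params_example)

with mlflow.start_run():
    info = mlflow.pyfunc.log_model(
        python_model=EchoModel(),
        artifact_path="model",
        signature=signature,
        input_example=input_example,
    )

# A different thread ID can be supplied at predict time via params.
loaded = mlflow.pyfunc.load_model(info.model_uri)
print(loaded.predict(input_example, params={"configurable": {"thread_id": "another-thread"}}))
```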
-
-
github.com
-
Maybe try changing this line in autogpt/processing/text.py? def split_text(text: str, max_length: int = 8192) -> Generator[str, None, None]: honestly, I'm still checking to see if that'd be it, but doubtful lol
-
-
github.com
-
Hi @cheyennee, OOM errors are not a problem with TensorFlow itself, and with a low number of filters it is indeed working. Oh, sorry, I missed this from your logs.
-
It appears that the number of filters does make a difference 😂. In your gist, where the number of filters is set to 40, there are no crashes. However, when I reproduce the above code in Colab with the number of filters increased to 1792, it crashes, and the error message suggests a potential OOM issue.
-
I have tested the given code on Colab and it's working fine. Please refer to the attached gist. Note that I have reduced the number of filters due to memory constraints, but it should not affect the reported behaviour. Could you please verify the attached behaviour? Also, can you confirm whether the issue is with the Windows package, as it will download the Intel package?
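A rough illustration of why the filter count matters here (the input size is an assumption, not the exact model from the issue): the activation tensor produced by the first convolution scales linearly with the number of filters, so 1792 filters needs roughly 45x the memory of 40.

```python
def conv_output_megabytes(filters: int, height: int = 222, width: int = 222) -> float:
    """Approximate size of one float32 activation map produced by a Conv2D layer."""
    return height * width * filters * 4 / 1e6  # 4 bytes per float32 element

print(conv_output_megabytes(40))    # ~7.9 MB per example
print(conv_output_megabytes(1792))  # ~353 MB per example
```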
-
-
github.com
-
Make sure Docker is installed and you have permission to run Docker commands: docker run hello-world
-
-
github.com
-
UPDATE: Now it uses pinecone. I had previously typed PINECONE in upper-case letters; that was the problem. Now the line in my .env file looks like this: MEMORY_BACKEND=pinecone. It crashes on the first run because it takes some time for the Pinecone index to initialize; that's normal. Still testing if it actually codes something now... UPDATE_2: Now it wants to install a code editor like PyCharm :-) Never seen it try something like this. I've tried to give it human feedback: "you don't need a code editor, just use the write_to_file function." Sadly, that confused the AI.
-
-
github.com
-
Add a new Claude-based workflow for when Dependabot opens a PR, to have Claude review it. Base it on the claude.yml workflow and make sure to include the existing setup; just add a custom prompt. Research the best way to do this with the Claude GitHub Action, and make it look up the changelog for all the changed dependencies, check them for breaking changes, and let us know if we're impacted.
-
Correct Approach
-
-
github.com
-
The agent blocks are missing their input/output pins because the input_schema and output_schema properties are not being populated in the GraphMeta objects when flows are loaded. When these are undefined, the CustomNode component falls back to empty schemas {}, resulting in no pins being rendered.
-
When rendered in CustomNode.tsx (lines 132-137), agent blocks replace their schema with the hardcoded values:
-
-
github.com
-
The Fix Applied
-
-
github.com
-
I just deleted the three lines in autogpt/prompt.py. Maybe not the nicest solution, but it works so far.
-
Could we give do_nothing a lower variability, maybe?
-
-
github.com
-
Maybe you can try again with EXECUTE_LOCAL_COMMANDS=false and RESTRICT_TO_WORKSPACE=true, and see if the file is written to the auto_gpt_workspace folder.
-
- Sep 2025
-
github.com
-
It's the plugins that need updating.
-
-
github.com
-
Try using something different from the local memory. I downloaded the code 5 days ago, so I don't know if it has been changed, but inside the config.py file in the scripts folder, on line 75, the memory backend is hardcoded to local. Change local to pinecone and use a Pinecone API key if you want.
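A hypothetical illustration of that change (the real scripts/config.py almost certainly looks different; this only shows the shape of the edit): replace the hardcoded "local" default, or better, read the backend from the environment.

```python
import os


class Config:
    def __init__(self) -> None:
        # was: self.memory_backend = "local"
        self.memory_backend = os.getenv("MEMORY_BACKEND", "pinecone")
        self.pinecone_api_key = os.getenv("PINECONE_API_KEY", "")


print(Config().memory_backend)
```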
-
- Aug 2025
-
github.com
-
The gist notebook executed successfully; however, I am still getting the error on this machine:
-
Could you try modifying tf.keras to keras and executing the code? I have changed some steps, like using tf_keras/keras.Sequential instead of tf.keras.Sequential, and the code executed without error. Kindly find the gist of it here. Thank you!
-
Also, I have changed some steps, like using tf_keras/keras.Sequential instead of tf.keras.Sequential, and the code executed without error. Kindly find the gist of it here.
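A minimal sketch of the substitution being described (assuming the tf_keras backwards-compatibility package is installed, e.g. via pip install tf_keras; the layer sizes are placeholders, not the notebook's actual model):

```python
import tf_keras  # Keras 2 compatibility package, used in place of tf.keras

model = tf_keras.Sequential([
    tf_keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf_keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```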
-