Findings:
16 Matching Annotations
- Oct 2025
- github.com
- github.com
  - The Likely Cause:
  - Root Cause Identified
- github.com
  - Root Cause Analysis
  - Successfully fixed the TypeError that occurs when the DataForSEO API returns an unexpected response structure in which items can be None.
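
A minimal sketch of the kind of guard the fix above describes, assuming a JSON-style response dict and a field literally named items; the function name and structure are illustrative, not the actual DataForSEO client code:

```python
def extract_items(response: dict) -> list:
    # Treat a missing or None "items" field as an empty list instead of
    # iterating over None, which is what raised the TypeError.
    items = response.get("items")
    return items if items is not None else []
```
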
- github.com
  - I'm running into the same; I think we need to limit certain chunks. We should be able to change the chunk size to fix it; I'm not sure that will fix the total token amount, but we can make the summaries smaller too.
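
A minimal sketch of the chunking idea in the comment above: split the input into smaller pieces so each summarization call stays within the token budget. The character-based splitting and the chunk size are illustrative assumptions, not Auto-GPT's actual implementation:

```python
def split_into_chunks(text: str, max_chars: int = 3000) -> list[str]:
    # Smaller chunks (and shorter per-chunk summaries) reduce the tokens per call,
    # though the total across all calls may still need separate limiting.
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

chunks = split_into_chunks("some very long scraped page text...", max_chars=1500)
```
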
- github.com
  - Here is how you can check whether there is a bottleneck on your machine: instead of running ./run.bat (or ./run.sh), you can run: python -m cProfile -o profile.pickle -m autogpt
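
Once profile.pickle has been written by the command above, the hot spots can be inspected with the standard library's pstats module; the sort key and row count below are just example choices:

```python
import pstats

# Load the cProfile output and print the 20 functions with the highest
# cumulative time, which usually points at the bottleneck.
stats = pstats.Stats("profile.pickle")
stats.sort_stats("cumulative").print_stats(20)
```
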
- github.com
  - This is a prompting issue and a limitation of LLMs.
- github.com
  - Make sure Docker is installed and that you have permission to run docker commands: docker run hello-world
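
The same check can be scripted if needed; this sketch just shells out to the docker CLI mentioned above and reports whether the daemon is reachable and the current user has permission to run containers:

```python
import subprocess

# A non-zero exit code usually means Docker is not installed, the daemon is not
# running, or the current user lacks permission to use it.
result = subprocess.run(
    ["docker", "run", "--rm", "hello-world"],
    capture_output=True, text=True,
)
print("Docker is usable" if result.returncode == 0 else result.stderr.strip())
```
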
- Sep 2025
- github.com
  - I also find that plugins_config (at line 178 of Auto-GPT-stable/autogpt/plugins/__init__.py) is always empty, no matter whether it is configured "correctly". It is also unclear how plugins need to be defined, since every plugin developer names their plugin differently (some with dashes, some without, etc.) and some do not yet implement the template plugin class, although from what I have read that is not yet strictly required. The docs say plugins work "as long as they are in the correct (NEW) format", but it is not clear what that format is either.
- github.com
  - Whenever it has completed its auto-tasks (I usually run tasks in blocks of 50, from y -50 to y -200), you can type a message to it instead of typing y -xx or n (to exit). It will say it doesn't understand, but it typically "fixes" itself and sometimes accepts what you've written.
  - Try using something other than the local memory. I downloaded the code 5 days ago, so I don't know if it has changed, but inside the config.py file in the scripts folder, on line 75, the memory backend is hardcoded to local. Change local to pinecone and use a Pinecone API key if you want.
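
A minimal sketch of the config change described in the second comment above; the variable and environment-variable names here are assumptions for illustration, not the actual contents of Auto-GPT's config.py:

```python
import os

# Instead of hardcoding the backend (described above as: memory_backend = "local"),
# read it and the Pinecone credentials from the environment.
memory_backend = os.getenv("MEMORY_BACKEND", "local")   # set to "pinecone" to switch
pinecone_api_key = os.getenv("PINECONE_API_KEY")        # required when using pinecone
```
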
- github.com
  - Both libraries use signals as an identifier, which leads to a namespace collision.
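
To illustrate the kind of clash described above with a runnable toy (using two real standard-library modules that both export a name path, not the libraries from the issue): importing the bare name from both shadows one of them, while aliases keep the two apart:

```python
from os import path        # os.path, a module
from sys import path       # rebinds "path" to sys.path, a list; os.path is shadowed

# Aliasing at import time avoids the collision entirely.
from os import path as os_path
from sys import path as sys_path
print(type(os_path), type(sys_path))
```
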
- github.com
  - OOM when allocating tensor with shape[50982027264] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[{{node gradient_tape/UnsortedSegmentSum/pfor/UnsortedSegmentSum}}]]
  - It may be that the tensor is too large and GPU memory is overflowing.
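
Two common mitigations for this kind of GPU OOM, offered as a general sketch rather than a fix verified against this particular model: let TensorFlow allocate GPU memory on demand, and reduce the batch size so no single tensor of that shape is requested:

```python
import tensorflow as tf

# Allocate GPU memory as needed instead of reserving it all up front.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

# A smaller batch (value here is illustrative) shrinks the tensors that ops like
# UnsortedSegmentSum must materialize during the gradient pass.
BATCH_SIZE = 8
```
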
- Aug 2025
- github.com
  - Hi, by default the Colab notebook uses TensorFlow v2.17, which contains Keras 3.0, and that was causing the error. Could you please try to import Keras 2.0 with the commands below.
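
The commands the commenter refers to are not included in the annotation above; the snippet below is the commonly documented way to fall back to Keras 2 on TensorFlow 2.16+ (it assumes the tf_keras compatibility package is installed, e.g. pip install tf_keras) and is not necessarily the exact set of commands from the thread:

```python
import os

# Must be set before TensorFlow is imported so tf.keras resolves to Keras 2 (tf_keras).
os.environ["TF_USE_LEGACY_KERAS"] = "1"

import tensorflow as tf
print(tf.keras.__version__)   # should report a 2.x Keras version
```
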