The issue has been resolved. Installing tf-keras and using keras from it instead of tf.keras fixed the problem. Thank you!
- Last 7 days
-
github.com
-
-
The gist notebook executed successfully; however, I am still getting the error on this machine:
-
Could you try changing tf.keras to keras and executing the code? I have changed some steps, such as using tf_keras.Sequential (keras.Sequential) instead of tf.keras.Sequential, and the code executed without errors. Kindly find the gist of it here. Thank you!
-
The solution where I installed tf_keras worked for that section, but I’m encountering a similar error in the "Attach a classification head" section of the same notebook. However, the previous solution does not seem to work in this case.
-
Also, I have changed some steps, such as using tf_keras.Sequential (keras.Sequential) instead of tf.keras.Sequential, and the code executed without errors. Kindly find the gist of it here.
-
Hi, by default the Colab notebook uses TensorFlow v2.17, which contains Keras 3.0, and that was causing the error. Could you please try to import Keras 2.0 with the commands below?
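The referenced commands were not captured in the clipping; below is a minimal sketch of the usual Keras 2 fallback, assuming the tf-keras compatibility package (pip install tf-keras):

import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"  # must be set before importing tensorflow

import tensorflow as tf

# tf.keras now resolves to the legacy Keras 2 implementation
print(tf.keras.__version__)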
-
added
-
ValueError: Only instances of `keras.Layer` can be added to a Sequential model. Received: <tensorflow_hub.keras_layer.KerasLayer object at 0x7d7e43bbeb10> (of type <class 'tensorflow_hub.keras_layer.KerasLayer'>)
-
import tensorflow as tf
import tensorflow_hub as hub

mobilenet_v2 = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4"
inception_v3 = "https://tfhub.dev/google/imagenet/inception_v3/classification/5"

classifier_model = mobilenet_v2  # @param ["mobilenet_v2", "inception_v3"] {type:"raw"}

IMAGE_SHAPE = (224, 224)

classifier = tf.keras.Sequential([
    hub.KerasLayer(classifier_model, input_shape=IMAGE_SHAPE + (3,))
])

link to notebook: "https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/transfer_learning_with_hub.ipynb"
-
When attempting to add hub.KerasLayer to a tf.keras.Sequential model, TensorFlow raises a ValueError, stating that only instances of keras.Layer can be added. However, hub.KerasLayer is a subclass of keras.Layer, so this behavior seems unexpected. I expected hub.KerasLayer to be accepted as a valid layer in the tf.keras.Sequential model, as per the TensorFlow documentation.
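A sketch of the workaround reported in this thread, assuming pip install tf-keras; the environment variable keeps tensorflow_hub on Keras 2 as well:

import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"  # set before importing tensorflow/hub

import tensorflow_hub as hub
import tf_keras

IMAGE_SHAPE = (224, 224)

# build the model from the legacy package instead of tf.keras
classifier = tf_keras.Sequential([
    hub.KerasLayer(
        "https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4",
        input_shape=IMAGE_SHAPE + (3,),
    )
])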
-
- Jun 2025
-
github.com
-
[XLA:GPU] Check whether the propagated tile offsets can be used. …
Commit message describes the model interpretability checking in detail
-
"nvcc --compiler-bindir /path/to/clang" sets __clang__ while compiling CUDA code. This causes gpu_device_functions.h to think it is being compiled with Clang and try to use a Clang-specific function.
-
-
github.com
-
sklearn 1.6 installed with conda (above)
sklearn 1.3 installed with conda
sklearn 1.6 installed with pip
Cross Version Validation
-
-
github.com
-
return
Fixed some miscellaneous issues after reviewing
-
-
-
ONNX 1.17.1 prints:
Cross validation
-
-
github.com
-
This PR fixes a C compatibility issue in the TfLiteQuantizationType enum definition. The current definition uses C++ syntax (enum : int) which causes compilation errors when included in C projects. This PR adds conditional compilation directives to use C++ syntax only when compiling with C++.
-
-
github.com
-
But I think it was a lot of gdb backtracing that led me to the file.
-
Whenever the FreezeSavedModel function is called in the code, tensorflow::ClientSession cannot execute properly. Things work absolutely fine if the FreezeSavedModel call is commented out.
-
-
github.com
-
TF version: 1.13.1, bazel 0.19.2. Platform:
-
It seems that it is getting confused by the double quotes.
-
Facing the same issue here. My OS is Ubuntu 18.04.
-
-
github.com
-
TOCO applies a set of optimizations to reduce the model size, improve inference speed, and ensure compatibility with the target platform.
-
we are converting them to reshapes so that we can use standard reshape optimization transforms
-
The behavior you mentioned, where tf.squeeze() is converted to a reshape operator when using TOCO (the TensorFlow Lite Optimizing Converter), is expected.
-
-
github.com
-
Please use pip<24.1 if you need to use this version.
-
absl-py (<0.11,>=0.10.0)
-
Ignoring version 0.3.1.dev202105110329 of tflite-model-maker-nightly since it has invalid metadata:
-
I'm able to replicate the same behavior from my end
-
-
github.com
-
Are you satisfied with the resolution of your issue?
-
Standalone code to reproduce the issue
-
-
github.com
-
an m1 mac running big sur, homebrew python 3.8.12, pip version 21.3, using the tensorflow_macos virtual environment
-
Taking @alfaro96's comment as a hint, I tried another pip install but with Python 3.8, and it worked. Hope this helps other people.
-
No module named pip
-
"pip install --pre --extra-index https://pypi.anaconda.org/scipy-wheels-nightly/simple scikit-learn" works too if you are on 3.9.
-
We have not released a version supporting Python 3.9 on PyPI yet.
-
-
github.com
-
https://reviews.llvm.org/D81045 should help here.
-
That's right - that would be the minimal form - the copy has nothing to do with this issue. Not a high priority issue, but would be good to fail verification in such cases.
validated
-
So the issue is with the custom terminator which isn't strict about the possible parent operations? Then the minimal example is something like:
-
To reproduce: $ bazel-bin/tensorflow/compiler/mlir/tf-opt verify.mlir.
-
-
github.com
-
but indeed we can also stop building/testing our Python 3.13 wheels with numpy nightly, so updating that in #59819
config updated
-
Note that numpy 2.1.1 has Python 3.13 wheels and still has np._get_promotion_state so one option would be to switch to released numpy.
-
BUG: Remove np._get_promotion_state usage #59818
-
lesteve mentioned this on Sep 16, 2024: ⚠️ CI failed on Wheel builder (last failure: Sep 16, 2024) ⚠️ scikit-learn/scikit-learn#29852
-
AttributeError: module 'numpy' has no attribute '_get_promotion_state'
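Since np._get_promotion_state is a private API that newer numpy builds removed, a defensive sketch of using it looks like this (the fallback value is an assumption):

import numpy as np

# guard the private API so the AttributeError above cannot occur
if hasattr(np, "_get_promotion_state"):
    state = np._get_promotion_state()
else:
    state = "weak"  # assume the NEP 50 default in numpy 2.x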
-
-
github.com
-
#undef signals
-
Using MSVC 2022, but the same error occurs when using the LLVM/MSVC 2019 compiler with my C++ application (in Qt 6.5.0)
-
Both libraries use signals as an identifier, which leads to a namespace collision.
-
Standalone code to reproduce the issue
-
-
github.com
-
This guide details the migration from Estimator to Keras APIs: https://www.tensorflow.org/guide/migrate/migrating_estimator
Migration guidance and deprecation info shared
-
Faced the same issue with tensorflow 2.3.0 and tensorflow-hub 0.10.0. I solved it by uninstalling tensorflow-estimator, tensorflow-hub, and tensorflow, then installing tensorflow and tensorflow-hub again. Now it's working :-)
-
Reinstalled tensorflow and tensorflow-hub and it worked :-)
-
So they may change between versions. My guess is that for me it was an installation issue
-
tf.estimator.Estimator(model_fn)
-
-
github.com
-
This was released as part of modelstore==0.0.75
-
I can confirm that it works with the latest main.
-
I try exists(); if that fails with a ValueError, I try list_blobs().
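A sketch of that fallback, assuming the google-cloud-storage client; the bucket name is illustrative:

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")  # illustrative name

try:
    found = bucket.exists()
except ValueError:
    # fall back to listing: seeing any blob means the bucket is reachable
    found = any(client.list_blobs("my-bucket", max_results=1))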
-
It is triggered when bucket.exists() is called,
-
I've managed to reproduce this error without modelstore.
-
-
github.com
-
Hi @cheyennee, OOM errors are not a problem with TensorFlow, and with a low number of filters it is indeed working. Oh sorry, I missed this in your logs.
-
I'll reduce the number of filters. Thank you!
-
OOM when allocating tensor with shape[50982027264] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[{{node gradient_tape/UnsortedSegmentSum/pfor/UnsortedSegmentSum}}]]
-
I have tested the given code on Colab and it's working fine. Please refer to the attached gist.
validated the code
-
TF2.14 (for issues related to TensorFlow 2.14.x), comp:keras
-
Conv3DTranspose_class = tf.keras.layers.Conv3DTranspose(
    filters, kernel_size, strides=strides, padding=padding,
    output_padding=output_padding, data_format=data_format,
    dilation_rate=dilation_rate, activation=activation, use_bias=use_bias,
    kernel_initializer=kernel_initializer, bias_initializer=bias_initializer,
    kernel_regularizer=kernel_regularizer, bias_regularizer=bias_regularizer,
    activity_regularizer=activity_regularizer,
    kernel_constraint=kernel_constraint, bias_constraint=bias_constraint)
layer = Conv3DTranspose_class
inputs = __input___0
with tf.GradientTape() as g:
-
-
github.com
-
The Tensorflow team is constantly improving the framework by fixing bugs and adding new features. We suggest you try the latest TensorFlow version with the latest compatible hardware configuration which could potentially resolve the issue. If you are still facing the issue, please create a new GitHub issue with your latest findings, with all the debugging information which could help us investigate.
-
we require input indices to be unique. Otherwise, not only is the output non-deterministic on GPU, but gradients are broken on any device.
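A hedged illustration of that constraint: aggregate duplicate indices explicitly before any scatter-style op, so every output position is written exactly once:

import tensorflow as tf

indices = tf.constant([0, 2, 2, 1])          # index 2 appears twice
updates = tf.constant([1.0, 2.0, 3.0, 4.0])

# sum the duplicates up front; the op then sees unique indices
aggregated = tf.math.unsorted_segment_sum(updates, indices, num_segments=3)
print(aggregated.numpy())  # [1. 4. 5.]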
-
I was able to reproduce the issue on tensorflow v2.8, v2.9 and nightly. Kindly find the gist of it here.
-
-
github.com
-
serena-ruan deleted the fix_lc_autolog branch
-
fix list …
-
add test … 439cbd4
test case added
-
config["callbacks"] = [*callbacks, mlflow_callback]
-
-
github.com
-
edited by ekdnam
Edited the post to revise
-
So some weights in the new model have a different shape compared to the old model.
-
Could you please submit a minimal code snippet for reproducing the issue? Also, the dataset modified_train.txt is missing.
-
- May 2025
-
github.com
-
keras_core will subsequently be released as Keras 3, and the tf.keras module will become legacy code. Thanks!
dead code removed
-
I have checked the code with keras_core, which is now a multi-backend library. With keras_core the reported behaviour does not occur. Please refer to the attached gist.
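For context, a minimal sketch of exercising code under keras_core, the multi-backend preview that became Keras 3; the backend choice is an assumption:

import os
os.environ["KERAS_BACKEND"] = "tensorflow"  # jax and torch also work

import keras_core as keras

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")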
-
loop in reconstruct_from_config()
-
I was able to replicate this issue on colab, please find the gist here. Thank you!
-
-
github.com
-
The areas that correspond to the difference in the lite versions are actually different in the h5 files. Furthermore, the dimensionality of some elements has changed from the start to the red arrow and downwards. Can either of you try again when the model architecture is identical prior to conversion?
-
I found that the branch of the "Quantize" node for concat op quantization is different (as shown in the figure below). Left is OK, and right is bad.
-
It is not clear how you are getting different results on quantization; perhaps you can explain more. Are you observing incorrect results with tf.compat.v1.lite.TFLiteConverter.from_keras_model_file and correct results with tf.lite.TFLiteConverter.from_keras_model?
-
-
github.com
-
copybara-service bot merged commit 749de42 into master on Feb 12, 2025; 2 checks passed
-
deleted the exported_pr_715840018 branch
-
force-pushed the exported_pr_715840018 branch 19 times, most recently from 7ed12b8 to 4c7bc36 on February 11, 2025 03:16
-
-
github.com
-
It worked. Thank you very much.
-
tf.app.run is deprecated in TF 2.x; please use tf.compat.v1.app.run for TF 2.12.
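A minimal sketch of that migration; the main function is illustrative:

import tensorflow as tf

def main(_):
    print("running")  # real scripts put their logic here

if __name__ == "__main__":
    # tf.app.run was removed from the TF 2.x namespace; the compat
    # alias keeps old entry points working unchanged
    tf.compat.v1.app.run(main)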
-
-
github.com
-
B-Step62 merged commit 4eba574 into mlflow:master on Feb 3, 2025; 44 of 47 checks passed
-
Hence, this PR removes the system prompt from input example.
-
As a result, when users build a ChatModel / ChatAgent with those providers, they cannot log the model and hit a confusing error: with mlflow.start_run():
-
-
github.com
-
1 check passed
-
deleted
-
Update DetermineArgumentLayoutsFromCompileOptions to not overwrite pa… …
-
Update DetermineArgumentLayoutsFromCompileOptions to not overwrite parameter & result memory spaces.
-
-
github.com
-
deleted
deleted a branch
-
Mark tracing APIs as experimental …
-
Mark all user-facing tracing APIs as experimental.
-
-
github.com
-
Reaching basic code quality prevents large merge conflicts, and allows for testing so PRs don’t break functionality. It also allows for a speedy pr review process. Until then, I’ll be maintaining an active fork with best practices for python development. Feel free to fork this code to make the idea work in your repo.
-
CI is red
-
I’ll start actively maintaining it after some (any) pr making scripts a module is merged. That’s a large part of the diff, and continuing to contribute to the repo is just not worth the effort without that change.
-
Now the system doesn't need to request the user's response
-
The main client currently connects to the rabbitmq database, but occasionally the connection is dropped. I'm having trouble getting the AI to use the commands so that I can debug them. The QA client has been rewritten using Rich to provide a very nice UI for question answering, but I haven't tested it yet. I'm a bit bogged down at work. Still hoping to work on it and be done this week, at least by this weekend.
-
Renamed scripts to autogpt. Absolute Imports. isort. Fixed flake8 F40… …
added boilerplate code
-
-
-
I thought I could use it for free. If that works on your side, that's good.
-
Hi @Gumichocopengin8, did you buy any credits for your OpenAI API? The above error happens when you don't have any credits in your account.
Identified the issue
-
I did that too, as well as OPENAI_API_KEY=xxx python code.py, but I don't think it matters; neither worked.
Implemented the proposed fix
-
Thanks for the report, and thanks @joelrobin18 for the PR. @Gumichocopengin8, does changing openai.ChatCompletion to openai.chat.completions fix the code sample for you?
check the other version of the OpenAI SDK
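A sketch of the suggested change against the openai>=1.0 SDK; the model name is illustrative:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# openai.ChatCompletion.create(...) from the 0.x SDK becomes:
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)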
-
-
github.com
-
Thanks for your help. I could register the graph with MLflow 2.20. Please add this guidance to the documentation as well.
Version update
-
Could you try updating MLflow to 2.20 or newer? The dictionary parameter type is supported since 2.20.
check version
-
You can pass different thread ID (params) at runtime. The one passed to log_model is just an input "example" for MLflow to determine the input signature (type) for the model.
Discussion
-
You can use params to pass the configurable object including thread ID.
suggested a new parameter
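A hedged sketch of that suggestion with a pyfunc model; the model URI and the params key are assumptions:

import mlflow

model = mlflow.pyfunc.load_model("models:/chat_model/1")  # illustrative URI

# the thread ID logged with the input example is only a signature hint;
# the real value can be supplied per call through `params`
answer = model.predict(
    {"messages": [{"role": "user", "content": "Hi"}]},
    params={"thread_id": "thread-42"},
)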
-
-
github.com
-
Copy-and-paste code to reproduce this:
Reproduced the bug
-
Okay, I found out that I need to set the subsample parameter to something less than 1 to get the score.
Parameter not specified
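A sketch of that finding, assuming the score in question is the out-of-bag estimate of GradientBoostingClassifier, which is only computed when subsample < 1.0:

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(random_state=0)

# with subsample < 1.0 (stochastic gradient boosting) the OOB
# improvement is tracked; with the default subsample=1.0 it is absent
clf = GradientBoostingClassifier(subsample=0.8, random_state=0).fit(X, y)
print(clf.oob_improvement_[:3])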
-
-
github.com
-
adrinjalali approved these changes
approved
-
I left a few comments, please fix the rest of the PR accordingly.
reviewed the code
-
Unit tests created to verify correct warning messages are raised upon usage of n_alphas parameter. Unit tests created to verify correct warning message if using the default value of alphas. All unit tests passing and all warnings suppressed on existing test cases using filter warnings.
Unit test added
-
Updated LinearModelCV and derived classes LassoCV, ElasticNetCV, MultiTaskElasticNetCV, MultiTaskLassoCV to remove n_alphas parameter from the constructor. alphas parameter is updated to support an integer or array-like argument. Functionality of n_alphas is preserved by passing an integer to alphas. Parameter_constraints updated accordingly.
changed the derived class parameters
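A sketch of the resulting API, assuming a scikit-learn version that includes this change:

from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV

X, y = make_regression(noise=1.0, random_state=0)

# an integer now selects the grid size (what n_alphas used to do) ...
model_grid = LassoCV(alphas=100).fit(X, y)
# ... while an array-like still supplies an explicit grid
model_explicit = LassoCV(alphas=[1.0, 0.1, 0.01]).fit(X, y)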
-
-
-
requested a review from a team as a code owner
asked for code review
-
Codecov Report
verified the test case result
-
-
github.com
-
Onnx Build Logs Iteration-1.txt
cyyever commented on Mar 21, 2025 (edited): @vijayaramaraju-kalidindi The situation is tricky in your case because the protobuf libraries are all static libraries. I need to know your Linux distribution.
Reviewing the solution
-
A quick workaround is to apt-get remove the protobuf packages and build onnx from https://github.com/cyyever/onnx/tree/protobuf6. This branch of onnx will download and compile protobuf 6.30.1 as a dependency automatically, which matches Python's protobuf.
Removing the dependency
-
We must detect such cases and link to protobuf::libprotobuf, which should be a shared library.
Proposed a new branch
-
Thanks, this worked. Successfully built ONNX.
Verified the Fix
-
The PR has been merged; you could try the main branch.
Merged the PR after a successful retest
-
This error originates from a subprocess, and is likely not a problem with pip.
Identified the error's origin
-
-
github.com
-
copybara-service mentioned this on Apr 15, 2025: PR #91416: Fix C compatibility issue in TfLiteQuantizationType enum (google-ai-edge/LiteRT#1736)
Fixed C compatibility issue
-
Fix C compatibility issue in TfLiteQuantizationType enum #91416
Implemented the fix
-
No problems using TensorFlow Lite 2.18. Commit 977257e caused the issue.
Root Cause Analysis
-
I am closing this issue because PR #91416 got merged. If you face any issues, please feel free to post your comments; if required, I'll reopen this issue.
Merging and Closing the Issue
-
-
github.com
-
google-ml-butler assigned tilakrayal
Assigned to a user
-
Relevant log output
Found the problem in gpu_device_functions.h
-
Standalone code to reproduce the issue
Tried to reproduce the standalone Code
-
plopresti added a commit that references this issue
Commit to Fix the issue
-
-
github.com
-
Fixes API Deprecate n_alphas in LinearModelCV #30616
Fixed the issue
-
adrinjalali closed this as completed in #30616
Approved & Merged
-
-
-
justinchuby closed this as completed
Closed the issue
-
Could you test with the latest onnx-weekly package (you can install it from pip)? It may have been fixed
Experimental Build Tested
-
The contents of op_run.to_array_extended(t), and in 1.17.1 those of onnx.numpy_helper.to_array(t) as well, may vary because it is actually an uninitialized array.
Root Cause Identified
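For contrast, a round trip through a properly initialized tensor, where to_array is well-defined:

import numpy as np
from onnx import numpy_helper

# serialize a real array into a TensorProto and read it back; unlike
# the uninitialized tensor described above, this is deterministic
t = numpy_helper.from_array(np.arange(6, dtype=np.float32).reshape(2, 3), name="t")
print(numpy_helper.to_array(t))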
-
-
github.com
-
Add links to examples from the docstrings and user guides #26927
Improve the docs
-