How to debug "INVALID_ARGUMENT: input tensor alias not found in signature"

I am trying to access my TensorFlow Serving model, whose signature can be seen in this code segment: regression_signature = predict_signature_def( inputs={"phase_features" :...

How to convert data to a serialized tf.Example (tensorflow tfrecords) using Golang

Is there a Golang API to convert data to a serialized tf.Example, also known as tensorflow tfrecords? I can find a Python API, tf.python_io.TFRecordWriter(), to achieve this. But examples with...
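There is no official TFRecord writer in the Go TensorFlow bindings, but the on-disk framing is simple enough to port by hand. As a reference for such a port, here is a minimal pure-Python sketch of the TFRecord record framing (little-endian length, masked CRC32C of the length, payload, masked CRC32C of the payload); the payload itself would be a tf.Example protobuf serialized with Go's protobuf library.

```python
import struct

CRC32C_POLY = 0x82F63B78  # Castagnoli polynomial, reflected form

def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (slow but dependency-free; use a table in production)."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (CRC32C_POLY if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

def masked_crc(data: bytes) -> int:
    """TFRecord masks each CRC so that CRCs of data containing CRCs stay robust."""
    crc = crc32c(data)
    return ((((crc >> 15) | (crc << 17)) & 0xFFFFFFFF) + 0xA282EAD8) & 0xFFFFFFFF

def tfrecord_frame(payload: bytes) -> bytes:
    """Frame one record: u64 length, u32 crc(length), payload, u32 crc(payload)."""
    length = struct.pack("<Q", len(payload))
    return (length
            + struct.pack("<I", masked_crc(length))
            + payload
            + struct.pack("<I", masked_crc(payload)))
```

A Go port maps `struct.pack("<Q", ...)` to `binary.LittleEndian.PutUint64` and the CRC to the standard library's `hash/crc32` with the `crc32.Castagnoli` table.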

Prediction failed: contents must be scalar

I have successfully trained, exported and uploaded my 'retrained_graph.pb' to ML Engine. My export script is as follows: import tensorflow as tf from tensorflow.python.saved_model import...

How to read a utf-8 encoded binary string in tensorflow?

I am trying to convert an encoded byte string back into the original array in the tensorflow graph (using tensorflow operations) in order to make a prediction in a tensorflow model. The array to...

Define instance key (index number) for Cloud machine learning prediction

I followed the 'Getting Started' tutorial for Cloud Machine Learning Engine and deployed it. I can pass an input file containing JSON instances to Batch Prediction service and it returns a file...
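Batch Prediction does not guarantee that outputs come back in input order, so the usual workaround is to attach a key to each instance and have the serving signature pass it through untouched. A minimal sketch of building such a JSON Lines input file (the feature names here are placeholders, not the tutorial's real schema):

```python
import json

# Hypothetical feature rows; "key" is an instance index the model should echo back.
rows = [
    {"key": 0, "age": 25, "hours_per_week": 40},
    {"key": 1, "age": 52, "hours_per_week": 30},
]

# Batch Prediction expects one JSON object per line (JSON Lines).
jsonl = "\n".join(json.dumps(row) for row in rows)
```

On the model side, the serving input function must accept "key" as an extra input and the exported signature must return it unchanged alongside the prediction, so each output line can be joined back to its input.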

tensorflow_model_server Access-Control-Allow-Origin

I would like to set up a TensorFlow Serving endpoint which can be accessed like an API from a different origin domain. I have exported my model successfully, and I can get predictions via POST...
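tensorflow_model_server does not emit CORS headers itself, so the usual approach is to put a reverse proxy in front of its REST endpoint and add the headers there. A sketch of an nginx config, assuming the model server's REST port is the default 8501 (tighten the allowed origin for production):

```nginx
server {
    listen 80;

    location /v1/ {
        # Answer CORS preflight requests without hitting the model server.
        if ($request_method = OPTIONS) {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'POST, GET, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'Content-Type';
            return 204;
        }
        add_header 'Access-Control-Allow-Origin' '*' always;
        proxy_pass http://127.0.0.1:8501;
    }
}
```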

How do I export a graph to Tensorflow Serving so that the input is b64?

I have a Keras graph with a float32 tensor of shape (?, 224, 224, 3) that I want to export to Tensorflow Serving, in order to make predictions with RESTful. Problem is that I cannot input tensors,...
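On the request side, TF Serving's REST API treats a JSON object whose only key is "b64" as a binary value, so the exported signature must accept a serialized image string (DT_STRING) and decode it in-graph rather than taking a float tensor. A sketch of building such a request body (the input name image_bytes is an assumption about the exported signature; check it with saved_model_cli):

```python
import base64
import json

def make_predict_body(image_bytes: bytes) -> str:
    """Build a TF Serving REST :predict body for a base64 binary input."""
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    # The {"b64": ...} wrapper tells the REST API to decode this value
    # into a DT_STRING tensor before feeding the signature.
    return json.dumps({"instances": [{"image_bytes": {"b64": b64}}]})

body = make_predict_body(b"\x89PNG\r\n fake image bytes")
```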

TensorFlow Serving: Update model_config (add additional models) at runtime

I'm busy configuring a TensorFlow Serving client that asks a TensorFlow Serving server to produce predictions on a given input image, for a given model. If the model being requested has not yet...
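In recent TF Serving versions, the server can pick up new models at runtime if it is started with --model_config_file plus a poll interval (e.g. --model_config_file_poll_wait_seconds=60); appending an entry to the config file then loads the model without a restart. A sketch of the config format (names and paths are placeholders):

```proto
model_config_list {
  config {
    name: "model_a"
    base_path: "/models/model_a"
    model_platform: "tensorflow"
  }
  config {
    name: "model_b"            # added at runtime; picked up on the next poll
    base_path: "/models/model_b"
    model_platform: "tensorflow"
  }
}
```

Alternatively, the gRPC ModelService exposes a reload-config RPC, which lets a client push a new ModelServerConfig instead of waiting for the file poll.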

How to make a model ready for the TensorFlow Serving REST interface with a base64 encoded image?

My understanding is that I should be able to grab a TensorFlow model from Google's AI Hub, deploy it to TensorFlow Serving and use it to make predictions by POSTing images via REST requests using...

TensorFlow Serving Error - Could not find meta graph def matching supplied tags: { serve }

I am trying to restore a TensorFlow Saver object (.ckpt.*) and convert it into a SavedModel object (.pb) so that I can deploy it with TensorFlow Serving. This is how I convert: with...

TensorFlow Serving REST API - JSON Parse Error

I've frozen and exported a SavedModel, which takes as input a batch of videos that has the following format according to saved_model_cli: The given SavedModel SignatureDef contains the following...
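A frequent cause of the REST API's JSON parse errors is a payload that Python's json module happily emits but strict parsers reject, such as NaN or Infinity values in the tensor data; json.dumps(..., allow_nan=False) catches these on the client before the request is ever sent. A small sketch:

```python
import json

def safe_body(instances) -> str:
    """Serialize a :predict body, rejecting NaN/Infinity up front."""
    return json.dumps({"instances": instances}, allow_nan=False)

ok = safe_body([[0.1, 0.2]])  # well-formed payload serializes normally

try:
    safe_body([[0.1, float("nan")]])
    leaked = True
except ValueError:
    leaked = False  # NaN caught locally instead of a server-side parse error
```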

keras error when trying to get intermediate layer output: Could not create cudnn handle

I am building a model using keras. I am using: anaconda (python 3.7) tensorflow-gpu (2.1) keras (2.3.1) cuda (10.1.2) cudnn (7.6.5) nvidia driver (445.7) nvidia gpu: gtx 1660Ti (6GB) when I am...

Use tensorboard with object detection API in sagemaker

with this I successfully created a training job on sagemaker using the Tensorflow Object Detection API in a docker container. Now I'd like to monitor the training job using sagemaker, but cannot...

TensorFlow C API Logging Setting

I am trying to suppress the logging of TensorFlow in the C API when it loads a saved model. The logging looks like this: 2020-07-24 13:06:39.805191: I tensorflow/cc/saved_model/reader.cc:31]...
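Those `I ...` lines come from TensorFlow's C++ logging, which is controlled by the TF_CPP_MIN_LOG_LEVEL environment variable (0 = everything, 1 = drop INFO, 2 = also drop WARNING, 3 = also drop ERROR). It must be set before the library initializes; from C that means calling setenv() (or exporting the variable in the shell) before TF_LoadSessionFromSavedModel. The equivalent from Python, for comparison:

```python
import os

# Must happen before TensorFlow is imported/initialized in the process.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"  # suppress INFO and WARNING lines
```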

How to pass multiple 2d array as input to tensorflow serving api?

I have a tensorflow model which takes two 2d arrays as input. This is how I trained the model. x.shape == (100, 250) y.shape == (100, 10) model.fit([x,y], y_train) Now I'm using tensorflow...
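With multiple named inputs, the REST API's row format expects each element of "instances" to be an object keyed by input name, carrying one row of each input. A sketch with toy shapes (the names input_1/input_2 are assumptions; check the real ones with saved_model_cli):

```python
import json

# Toy stand-ins for the two 2-D arrays: shapes (2, 3) and (2, 2)
# instead of the real (100, 250) and (100, 10).
x = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]
y = [[1.0, 0.0], [0.0, 1.0]]

# Row format: zip the arrays example-wise, one dict per instance.
instances = [{"input_1": xr, "input_2": yr} for xr, yr in zip(x, y)]
body = json.dumps({"instances": instances})
```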

Incompatible shapes: [11,768] vs. [1,5,768] - Inference in production with a huggingface saved model

I have saved a pre-trained version of distilbert, distilbert-base-uncased-finetuned-sst-2-english, from Hugging Face models, and I am attempting to serve it via Tensorflow Serve and make...

I am training a model on GCP with "gcloud ai-platform jobs submit training" and it does not export a savedmodel.pb

I am using the code below to train a model on GCP locally, and it returns an exported savedmodel.pb. %%bash # Use Cloud Machine Learning Engine to train the model in local file system gcloud...

Batch prediction using a trained Object Detection APIs model and TF 2

I successfully trained a model using Object Detection APIs for TF 2 on TPUs which is saved as a .pb (SavedModel format). I then load it back using tf.saved_model.load and it works fine when...

Deploy Flask and Tensorflow serving on Google App Engine

I want to deploy a Flask API with gunicorn and tensorflow serving to Google App Engine (Flex). I wrote a Dockerfile and startup.sh, but the deployment fails. I increased the memory to 6GB and set the timeout to 2...
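A common pitfall in this setup is that App Engine Flex expects the container to serve on $PORT, while TF Serving and gunicorn each need their own port, and only one process can be the container's foreground process. One sketch of a startup.sh, with the model name and Flask module as placeholders:

```shell
#!/bin/sh
# Model server on an internal port; the Flask app talks to it at localhost:8501.
tensorflow_model_server --rest_api_port=8501 \
    --model_name=my_model --model_base_path=/models/my_model &

# gunicorn must bind the $PORT App Engine provides and run in the foreground.
exec gunicorn --bind ":${PORT:-8080}" --timeout 120 app:app
```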

Why would this TensorFlow Serving gRPC call hang?

We have a fairly complicated system that stitches together different data sources to make product recommendations for our users. Among the components is often a call out to one or more TensorFlow...

Very slow performance on extracting tensorflow-serving grpc request results using .float_val

For some reason the time used to extract results using .float_val is extremely high. Scenario example along with its output: t2 = time.time() options = [('grpc.max_receive_message_length', 100 *...

How to INCLUDE certain pre-processing step into model for Tensorflow serving

I have built a model with different features. For the preprocessing I have used mainly feature_columns. For instance, for bucketizing GEO information or for embedding categorical data with a large...

Tensorflow-serving docker container adds the GPU device but GPU has 0% utilization

Hi, I'm having issues with dockerized TF Serving seeing but not using my GPU. It adds the GPU as device 0, allocates memory on it, but then loads the ML model into CPU device memory and runs...

GPU utilization is zero when running batch transform in Amazon SageMaker

I want to run a batch transform job on AWS SageMaker. I have an image classification model which I have trained on a local GPU. Now I want to deploy it on AWS SageMaker and make predictions using...

RaggedTensor request to TensorFlow serving fails

I've created a TensorFlow model that uses RaggedTensors. The model works fine, and when calling model.predict I get the expected results. input = tf.ragged.constant([[[-0.9984272718429565,...
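TF Serving's REST API has no JSON encoding for composite tensors, so ragged inputs generally cannot be posted directly. A common workaround is to change the serving signature to take the ragged tensor's flat values and row lengths as two dense inputs and rebuild the RaggedTensor inside the model; the request body then looks like this sketch (the input names are assumptions about such a reworked signature):

```python
import json

# A ragged batch of two sequences, lengths 3 and 1, flattened for transport.
flat_values = [-0.99, 0.12, 0.37, 0.50]
row_lengths = [3, 1]

body = json.dumps({
    "inputs": {
        "flat_values": flat_values,
        "row_lengths": row_lengths,
    }
})
```

Inside the model, something like tf.RaggedTensor.from_row_lengths reassembles the original ragged structure from these two dense tensors.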

TensorFlow Serving: Sending dictionary of multiple inputs to TFServing Model using the REST Api?

I'm serving a BERT model using TFServing and want to extract the hidden layers using the REST API. When using the model in Google Colab I can run inference just fine using: inputs = { ...
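For a dictionary of inputs like BERT's, the REST API's columnar format mirrors the Python feed dict: a single "inputs" object mapping each input name to its full batched tensor. A sketch (the names input_ids/attention_mask are assumed from a typical BERT signature; verify with saved_model_cli):

```python
import json

body = json.dumps({
    "inputs": {
        "input_ids": [[101, 7592, 102]],   # one example, already tokenized
        "attention_mask": [[1, 1, 1]],
    }
})
```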

TensorFlow Extended data_accessor.tf_dataset_factory() shape discrepancies

I am facing a perplexing issue while attempting to convert a vanilla tensorflow/keras workflow into a tensorflow extended pipeline. In short: the datasets generated using tfx’s ExampleGen...

Get second last value in each row of dataframe, R

I am trying to get the second last value in each row of a data frame, meaning the first job a person has had. (Job1_latest is the most recent job and people had a different number of jobs in the...

Keras 3D CNN gets stuck seemingly randomly during training on colab

I am new to using keras, colab and deep learning in general so I apologize for any mistakes. I'm trying to train a 3D U-net model on the BraTS dataset using keras on Google Colab. The model keeps...

Sagemaker deploy model with inference code and requirements

I trained a TensorFlow model and now I would like to deploy it. The data needs to be processed, so I have to specify one inference.py script and one requirements.txt file. When I deploy the...