Serve TensorFlow iris models

Export a model
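The export snippet below assumes train_x and classifier already exist, as in TensorFlow's premade-estimator iris tutorial. A minimal sketch of that setup, in which the CSV file name and hidden_units are assumptions, could look like this:

    import tensorflow as tf
    import pandas as pd

    CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth',
                        'PetalLength', 'PetalWidth', 'Species']
    train = pd.read_csv('iris_training.csv', names=CSV_COLUMN_NAMES, header=0)
    train_x, train_y = train, train.pop('Species')

    # A small DNN over the four numeric features, one output per iris class;
    # train it with classifier.train(...) before exporting.
    classifier = tf.estimator.DNNClassifier(
        feature_columns=[tf.feature_column.numeric_column(key=k) for k in train_x.keys()],
        hidden_units=[10, 10],
        n_classes=3)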

    # Feature columns describe how to use the input.
    my_feature_columns = []
    for key in train_x.keys():
        my_feature_columns.append(tf.feature_column.numeric_column(key=key))

    # Build a parsing serving_input_receiver_fn: the exported model will accept
    # serialized tf.train.Example protos and parse them with this feature spec.
    feature_spec = tf.feature_column.make_parse_example_spec(my_feature_columns)
    serving_input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)

    # Writes a timestamped SavedModel directory, e.g. export/1568007663.
    export_dir = classifier.export_savedmodel('export', serving_input_receiver_fn)

or, writing the serving_input_receiver_fn by hand (here for a BERT-style model):

def serving_input_receiver_fn():
  max_seq_length = FLAGS.max_seq_length
  batch_size = 1
  feature_spec = {
      "unique_ids": tf.FixedLenFeature([], tf.int64),
      "input_ids": tf.FixedLenFeature([max_seq_length], tf.int64),
      "input_mask": tf.FixedLenFeature([max_seq_length], tf.int64),
      "segment_ids": tf.FixedLenFeature([max_seq_length], tf.int64),
  }

  # The exported model takes a batch of serialized tf.train.Example protos
  # and parses them according to feature_spec.
  serialized_tf_example = tf.placeholder(dtype=tf.string,
                                         shape=[batch_size],
                                         name='input_example_tensor')
  receiver_tensors = {'examples': serialized_tf_example}
  features = tf.parse_example(serialized_tf_example, feature_spec)
  return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)

..
..

estimator.export_saved_model(
    export_dir_base=EXPORT_PATH,
    serving_input_receiver_fn=serving_input_receiver_fn)
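Since this receiver parses serialized tf.train.Example protos, a client has to serialize its features the same way. A sketch of that client side; the max_seq_length value and the all-zero features are placeholders:

    import tensorflow as tf

    max_seq_length = 128  # must match FLAGS.max_seq_length used at export time
    example = tf.train.Example(features=tf.train.Features(feature={
        "unique_ids": tf.train.Feature(int64_list=tf.train.Int64List(value=[0])),
        "input_ids": tf.train.Feature(int64_list=tf.train.Int64List(value=[0] * max_seq_length)),
        "input_mask": tf.train.Feature(int64_list=tf.train.Int64List(value=[0] * max_seq_length)),
        "segment_ids": tf.train.Feature(int64_list=tf.train.Int64List(value=[0] * max_seq_length)),
    }))
    serialized = example.SerializeToString()  # fed to the 'examples' receiver tensor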

Use saved_model_cli to check the model's inputs and outputs.

$ saved_model_cli show --dir export/1568007663 --tag_set serve  --signature_def serving_default

The given SavedModel SignatureDef contains the following input(s):
  inputs['inputs'] tensor_info:
      dtype: DT_STRING
      shape: (-1)
      name: input_example_tensor:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['classes'] tensor_info:
      dtype: DT_STRING
      shape: (-1, 3)
      name: dnn/head/Tile:0
  outputs['scores'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 3)
      name: dnn/head/predictions/probabilities:0
Method name is: tensorflow/serving/classify
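
Before starting a server, the signature can also be smoke-tested in-process. A sketch using tf.contrib.predictor, assuming TF 1.x where tf.contrib is available; the input key 'inputs' and the output keys 'classes'/'scores' come from the signature shown above:

    import tensorflow as tf
    from tensorflow.contrib import predictor

    # Load the serving_default signature of the SavedModel as a callable.
    predict_fn = predictor.from_saved_model('export/1568007663')

    # The classify signature expects serialized tf.train.Example protos.
    example = tf.train.Example(features=tf.train.Features(feature={
        'SepalLength': tf.train.Feature(float_list=tf.train.FloatList(value=[5.1])),
        'SepalWidth': tf.train.Feature(float_list=tf.train.FloatList(value=[3.3])),
        'PetalLength': tf.train.Feature(float_list=tf.train.FloatList(value=[1.7])),
        'PetalWidth': tf.train.Feature(float_list=tf.train.FloatList(value=[0.5])),
    }))
    print(predict_fn({'inputs': [example.SerializeToString()]}))  # {'classes': ..., 'scores': ...}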

How to run it as a server

$ tensorflow_model_server --rest_api_port=8501 --model_name=iris --model_base_path=/home/hm_home/work/anything/ml/iris/export
  • 8500: gRPC (see the client sketch at the end of the next section)
  • 8501: REST API

How to test it as a client

/tmp/temp.json

[{"SepalLength":[5.1],"SepalWidth":[3.3],"PetalLength":[1.7],"PetalWidth":[0.5]}]
$ curl -X POST http://localhost:8501/v1/models/iris:classify  -H "Content-Type: application/json" -d @/tmp/temp.json
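
The curl call above hits the REST endpoint on 8501. For the gRPC endpoint on 8500, a minimal client sketch, assuming the tensorflow-serving-api pip package is installed, could look like this:

    import grpc
    import tensorflow as tf
    from tensorflow_serving.apis import classification_pb2, prediction_service_pb2_grpc

    channel = grpc.insecure_channel('localhost:8500')
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    # The classify RPC carries tf.train.Example protos.
    example = tf.train.Example(features=tf.train.Features(feature={
        'SepalLength': tf.train.Feature(float_list=tf.train.FloatList(value=[5.1])),
        'SepalWidth': tf.train.Feature(float_list=tf.train.FloatList(value=[3.3])),
        'PetalLength': tf.train.Feature(float_list=tf.train.FloatList(value=[1.7])),
        'PetalWidth': tf.train.Feature(float_list=tf.train.FloatList(value=[0.5])),
    }))

    request = classification_pb2.ClassificationRequest()
    request.model_spec.name = 'iris'
    request.model_spec.signature_name = 'serving_default'
    request.input.example_list.examples.extend([example])

    print(stub.Classify(request, 10.0))  # 10-second timeout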