Text-bison fails at handling JSON output

I am sending a series of prompts to the model via batch prediction, and the jobs keep failing with this cryptic error:


Batch prediction job BatchPredictionJob 2024-04-16 19:35:08.030593 encountered the following errors:

  • Failed to run inference job. Query error: Unsupported endpoint: The output data is not valid json. Original output: { "predictions": [[NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN] ] }.. at [39:1]


What in the world does this mean? I assume that because I'm asking for JSON output, the job is somehow failing to handle the output string correctly? Has anyone else dealt with this?
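For reference, here's roughly how I'm launching the job. This is a simplified sketch: the project, bucket paths, and parameter values are placeholders, and I'm using the standard `batch_predict` helper from the Vertex AI SDK (the `model_parameters` key casing follows the docs sample).

```python
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-project", location="us-central1")  # placeholders

model = TextGenerationModel.from_pretrained("text-bison")

# Input is a JSONL file where each line is {"prompt": "..."}; results are
# written as JSONL under the destination prefix.
job = model.batch_predict(
    dataset="gs://my-bucket/prompts.jsonl",           # placeholder path
    destination_uri_prefix="gs://my-bucket/output/",  # placeholder path
    model_parameters={
        "temperature": "0.0",
        "topK": "1",
        "maxOutputTokens": "1024",
    },
)
job.wait()  # block until the job finishes (or fails, as above)
print(job.state)
```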


I'm facing exactly the same issue with the same model, text-bison. I wonder if something changed on Google's side, because the exact same script was working a few days ago.

The maddening part is that I'm also getting "INTERNAL ERROR" as the only explanation. If I give the same prompt file to the model, sometimes it works and other times it doesn't.


I'm also using settings that should make the output deterministic:


Temp = 0.01 (I've also set it to 0. Doesn't seem to matter.)

TopK = 1

This is about as close to deterministic output as you can get, yet success is still stochastic. The error logs say nothing useful, either.
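For what it's worth, here's the shape of a single call with those settings. A minimal sketch: the prompt and project are placeholders, but the parameters are exactly the ones above.

```python
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-project", location="us-central1")  # placeholders

model = TextGenerationModel.from_pretrained("text-bison")

# Temperature 0 plus top_k=1 should pin the model to greedy decoding,
# yet the same prompt still succeeds on some runs and fails on others.
response = model.predict(
    "Return the result as a JSON object.",  # placeholder prompt
    temperature=0.0,
    top_k=1,
)
print(response.text)
```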

Are you by any chance using LangChain? I'm using LangChain, and someone mentioned the issue could come from the Vertex AI model not passing outputs to LangChain in the expected format.
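If you want to rule LangChain in or out, one quick check is to run the same prompt once through the wrapper and once through the SDK directly and compare. A rough sketch, assuming the `langchain_google_vertexai` package (the prompt below is a placeholder):

```python
# Assumes application-default credentials and the project are already configured.
from langchain_google_vertexai import VertexAI            # LangChain wrapper
from vertexai.language_models import TextGenerationModel  # direct Vertex AI SDK

prompt = "Return the result as a JSON object."  # placeholder: use a failing prompt

# Same settings both ways, so any difference in behavior points at the wrapper.
via_langchain = VertexAI(model_name="text-bison", temperature=0.0, top_k=1).invoke(prompt)

via_sdk = TextGenerationModel.from_pretrained("text-bison").predict(
    prompt, temperature=0.0, top_k=1
).text

print("LangChain :", via_langchain)
print("Direct SDK:", via_sdk)
```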