Building an AI recipe app

Looking up recipes online has become predictably soul-crushing. The top results are all search engine optimized blog posts with banner ads, popup ads, email signup screens, and a video with more embedded ads. We get it, the content is free so the site owners need to monetize it somehow. But it seems like in the past few years they have gone too far. You’re relieved if the blogger has generously added a “jump to recipe” button so that you can skip over the long background story (which was only included to improve the search engine ranking).

Jump to Recipe

So I built a recipe search app that uses AI to return only ingredients and instructions. No long story, no ads, just the recipe content. This app is a simple wrapper around the OpenAI API, but there is added functionality that allows users to “enhance” the recipe or create a “healthy” variation with one click. Individual ingredients can also be substituted by the AI or sent to the user’s grocery list in Trello.

You can watch a short video demo of the app here.

The Jump to Recipe main page initially consists of just a simple input box and a “get recipe” button. If a user types in “fries” and hits enter, the site uses the OpenAI API to send a chat completion request to the GPT-3.5 model. Since I’m going to use the response to build out a webpage, I’ve indicated that I want the result returned as a JSON object:

const prompt = `return a recipe for ${userInput} formatted as a JSON object with the following keys: dish, ingredients, instructions`

openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: prompt }],
})
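
For context, the openai object here is an instance of the client from the v3 openai npm package. The setup below is a minimal sketch rather than the app’s actual source, and the environment variable name is my assumption:

// Client setup sketch for the v3 "openai" SDK (not shown in the original post)
import { Configuration, OpenAIApi } from "openai";

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY, // assumed environment variable name
});
const openai = new OpenAIApi(configuration);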

The model is pretty forgiving about how I indicate the formatting requirements; it accepts this pseudo-code style of specification. (UPDATE: OpenAI has since updated their API to allow you to receive a formatted JSON response. I’ve updated the app and written about it here: https://josephm.dev/blog/20230616/ The rest of this blog post describes how the API worked prior to the function calling updates.) Without this additional schema modeling, the AI model would still return a readable response, but the format could vary slightly between responses and I wouldn’t be able to write functions that reliably access the specific values I need in order to display the data on a webpage. It’s just much easier to work with the response if I can push the AI model into responding with JSON. The prompt above returns the following text when I set the userInput value to “fries”:

{"dish": "fries",
"ingredients": [
"4-5 large russet potatoes",
"1/4 cup vegetable oil",
"1 teaspoon salt"
],
"instructions": [
"Preheat oven to 450 degrees F (230 degrees C).",
"Wash and peel potatoes, and cut into desired shape and size.",
"In a large bowl, toss potatoes with vegetable oil and salt until evenly coated.",
"Spread potatoes in a single layer on a baking sheet.",
"Bake for 20-25 minutes, flipping once halfway through, or until golden brown and crispy.",
"Remove from oven, let cool for a few minutes, and serve hot."
]
}

I can then set the recipe state variable by parsing the generated text and passing it to the setRecipe setter function. Since the chat completion endpoint lets me specify the schema modeling but still returns the result as a JSON string, the JSON.parse() function is used to convert the AI response string into a JavaScript object:

.then((completion) => {
  const generatedText = completion.data.choices[0].message.content;
  setRecipe(JSON.parse(generatedText));
})
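
With the recipe state populated, the JSON keys map directly onto the page. The component below is only an illustrative sketch, not the app’s actual markup:

// Illustrative React component: renders the parsed recipe state.
// The component name and markup are my assumptions.
function RecipeView({ recipe }) {
  if (!recipe) return null;
  return (
    <div>
      <h2>{recipe.dish}</h2>
      <ul>
        {recipe.ingredients.map((ingredient, i) => (
          <li key={i}>{ingredient}</li>
        ))}
      </ul>
      <ol>
        {recipe.instructions.map((step, i) => (
          <li key={i}>{step}</li>
        ))}
      </ol>
    </div>
  );
}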

Using the AI to Generate Healthy and Enhanced Variations

Once the recipe loads, additional buttons appear that let the user request a recipe variation from the AI. Clicking the “make healthy” button sends a stringified version of the entire recipe state variable in a new prompt and asks the AI model to return a “healthier” version, again in a specified format:

const prompt = `Rewrite this recipe to be healthier:
${JSON.stringify(recipe)} Format response as:
{"dish": ${dishName}, "ingredients": [array of strings],
"instructions": [array of strings]}`

The schema modeling in this example is pretty casual, but the GPT-3.5 model does a good job of following up with a recipe variation that adheres to the JSON data structure:

{"dish":"oven-baked fries",
"ingredients":[
"4-5 medium sweet potatoes",
"1 tablespoon olive oil",
"1/2 teaspoon sea salt",
"1/4 teaspoon black pepper"],
"instructions":[
"Preheat oven to 400 degrees F (200 degrees C).",
"Wash and cut sweet potatoes into thin wedges.",
"In a large bowl, toss sweet potatoes with olive oil, salt, and black pepper until evenly coated.",
"Spread sweet potatoes in a single layer on a baking sheet lined with parchment paper.",
"Bake for 20-25 minutes, flipping once halfway through, or until tender and golden brown on the outside.",
"Remove from oven, let cool for a few minutes, and serve hot."]}

Clicking the “enhance” button sends a stringified version of the entire current recipe state variable into a new prompt and requests the AI model to return an “enhanced, more flavorful” version, again in a specified format. Clicking “enhance” on the “healthy” fries returns the following response:

{"dish": "Spicy Sweet Potato Fries",
"ingredients": [
"4-5 medium sweet potatoes",
"2 tablespoons olive oil",
"1/2 teaspoon sea salt",
"1/4 teaspoon black pepper",
"1/2 teaspoon smoked paprika",
"1/4 teaspoon cayenne pepper",
"1/4 teaspoon garlic powder"],
"instructions": [
"Preheat oven to 400 degrees F (200 degrees C).",
"Wash and cut sweet potatoes into thin wedges.",
"In a large bowl, whisk together olive oil, sea salt, black pepper, smoked paprika, cayenne pepper, and garlic powder.",
"Toss sweet potatoes with the spice mixture until evenly coated.",
"Spread sweet potatoes onto a baking sheet lined with parchment paper in a single layer.",
"Bake for 20-25 minutes, flipping once halfway through, or until tender and golden brown on the outside.",
"Remove from oven, let cool for a few minutes, and serve hot."] }

Iterative Prompt Development

During the testing period, I realized that my prompt was not specific enough to get consistent results. On another recipe, I clicked the enhance button and received the following response:

{"dish": "Enhanced Chicken Alfredo",
"ingredients": [
"8 oz fettuccine pasta",
"2 tbsp butter",
"2 cloves garlic, minced",
"1 lb boneless, skinless chicken breasts, cubed",
"1 cup heavy cream", "1/2 cup freshly grated Parmesan cheese",
"1/4 tsp red pepper flakes",
"1/4 cup chopped fresh basil",
"Salt and pepper to taste"],
"instructions": [
"Cook pasta according to package directions. Drain and set aside.",
"In a large saucepan, melt butter over medium heat. Add garlic and red pepper flakes and cook until fragrant.",
"Add chicken and cook until no longer pink.",
"Add heavy cream and Parmesan cheese and stir until cheese is melted.",
"Add cooked pasta to the sauce and toss to coat.",
"Stir in chopped basil and season with salt and pepper to taste. Serve immediately."]}
Enhancements:
- Added red pepper flakes for a bit of heat
- Added fresh basil for added flavor and freshness
- Specified to cook garlic and red pepper flakes together to infuse more flavor
- Changed dish name to reflect enhancements

In the response, the AI model provided a bullet point list of all the enhancements. This is helpful if you are just reading the recipe in a chat interface, but this additional text outside of the JSON object results in a breaking error when I try to parse the AI response:

SyntaxError:
JSON.parse: unexpected non-whitespace character after JSON data at line 3 column 1 of the JSON data

To handle this error, I updated the prompt to be more specific. I am also specifying that I want each instruction item to be numbered:

const prompt = `Enhance this recipe to make it more interesting and flavorful:
${JSON.stringify(recipe)} Format response as:
{"dish": ${userInput}, "ingredients": ["", "", ...],
"instructions": ["1. ...", "2. ...", ... ]}
Do not include anything outside of the curly braces.`;

Calling this “prompt engineering” is a bit of a stretch, but at the moment this kind of iterative prompt development is a necessary step when building programs that parse the response text to render components on a webpage.
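
Even with a tighter prompt, the model can occasionally slip extra text around the JSON, so a small client-side fallback before parsing doesn’t hurt. This isn’t part of the original app, just a defensive sketch that extracts the text between the first and last curly brace:

// Fallback sketch (my assumption): salvage the JSON object even if the model
// appends commentary outside of the curly braces.
const parseRecipeResponse = (text) => {
  const start = text.indexOf("{");
  const end = text.lastIndexOf("}");
  if (start === -1 || end === -1) {
    throw new Error("No JSON object found in model response");
  }
  return JSON.parse(text.slice(start, end + 1));
};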

Still, there is a lot of room for optimizing the prompt for performance and cost efficiency. In my current program, I am tossing the entire recipe object back into the same AI model when asking for a simple ingredient substitution. I could experiment further to see if other models are more cost effective or efficient for specific needs throughout the application.
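
For example, an ingredient substitution probably only needs the dish name and the single ingredient rather than the whole recipe object, which would cut the token count considerably. A leaner prompt might look something like this (a sketch of an alternative, not the app’s current implementation):

// Hypothetical leaner substitution prompt: sends only the ingredient and dish name.
const substitutionPrompt = `Suggest one substitute for "${ingredient}" in a recipe for ${recipe.dish}.
Format response as a JSON object: {"substitute": ""}
Do not include anything outside of the curly braces.`;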

Send Ingredient to Grocery List

In order to push this project a little bit past being just a wrapper around the OpenAI API, I implemented a feature that allows users to send ingredients to their Trello grocery list. Once a recipe is generated, the user can toggle a checkbox next to each ingredient and click “send to Trello” at the bottom of the ingredients list. A Trello card is then created for each selected ingredient and added to the user’s grocery shopping list on Trello:

const createTrelloCard = async (item) => {
  // apiKey, apiToken, and listId identify the Trello account and grocery list board
  const url = `https://api.trello.com/1/cards?key=${apiKey}&token=${apiToken}&idList=${listId}&name=${encodeURIComponent(item)}`;

  const response = await fetch(url, {
    method: 'POST',
  });

  if (response.ok) {
    console.log(`Created card for "${item}"`);
  } else {
    console.error(`Failed to create card for "${item}"`);
  }
};

// items is the array of ingredients the user selected
const createCardsOnTrelloBoard = async () => {
  for (const item of items) {
    await createTrelloCard(item);
  }
};

createCardsOnTrelloBoard();
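
The items array above holds the ingredients whose checkboxes were ticked. Below is a minimal sketch of how that selection might be tracked in React state; the names are my assumptions, not the app’s actual code:

// Illustrative only: track which ingredient checkboxes are ticked.
// Requires useState from React; "send to Trello" passes this array in as items.
const [selectedIngredients, setSelectedIngredients] = useState([]);

const toggleIngredient = (ingredient) => {
  setSelectedIngredients((prev) =>
    prev.includes(ingredient)
      ? prev.filter((i) => i !== ingredient)
      : [...prev, ingredient]
  );
};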

Best Recipe App?

Simply looking up recipes with ChatGPT is already fairly straightforward and seamless. The response time is almost immediate, the cost is zero, and any revisions can be done through simple follow-up commands. To that end, Jump to Recipe is perhaps only a slight improvement over just using ChatGPT. Like many of the AI products that have surfaced in the last few months, this is simply a “wrapper” with some added functionality like saving a recipe and integrating with another app (in this case, sending specific ingredients to my personal grocery list on Trello).

The major downsides of Jump to Recipe are the response time and the cost. In my experience, OpenAI took an average of 15 seconds to respond to API requests. ChatGPT starts responding immediately; the user just has to sit through the word-by-word loading. Looking up a recipe on ChatGPT is also free, for now at least. Hitting the API with these requests and revisions, however, does have a cost. These are the token counts for an apple pie query:

Getting initial recipe: 271 tokens
Enhancing recipe: 587 tokens
Substituting ingredient: 615 tokens
Making healthy variation: 588 tokens
Total: 2,061 tokens

OpenAI API pricing: $0.002/1k tokens
Total cost: $0.00412

The major improvements here are the single-click revisions and the Trello functionality. I think this is the key with wrapper apps: bring in another API if possible and vary the user experience so that people can interact with the AI beyond the chat format. In this case, the user only needs to type the recipe name; all further interaction consists of selecting and clicking. The prompts have been built to run off of these click events, and the AI seems to just rewrite the relevant components on the page.

Response time is a bit of an issue at the moment and the app is not totally free to operate, but I do love the simple interface, the ability to revise recipes on the fly, and being able to send the ingredients straight to my shopping list. At least I’m no longer skimming through those long stories when I look up a recipe.