Integration of Azure OpenAI in an Angular application

Sannidhi Siva
6 min read · Feb 26, 2024


Azure OpenAI Service is a platform that offers access to a variety of robust language models created by OpenAI. Here’s a simplified explanation of these models:

GPT-4: OpenAI’s flagship model, known for its safety and utility. It excels in solving complex problems due to its extensive general knowledge and problem-solving skills.

GPT-4 Turbo with Vision: A comprehensive multimodal model by OpenAI that can interpret images and respond to queries about them. It combines natural language processing and visual comprehension.

GPT-3.5-Turbo: A member of the GPT-3.5 series, this model is both highly capable and cost-effective. It’s primarily designed for chat applications but is flexible enough for a variety of tasks.

Embeddings model series: These are continuous vector representations of words or tokens that encapsulate their semantic meanings in a high-dimensional space. They enable the model to process discrete tokens in a format suitable for the neural network.
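As a small illustration (not part of the storybook scenario built later), a call to an embeddings deployment with the @azure/openai client library returns such a vector; the deployment name and placeholder credentials below are assumptions:

// Minimal sketch: assumes an embeddings deployment and placeholder endpoint/key.
import { OpenAIClient, AzureKeyCredential } from '@azure/openai';

const endpoint = 'https://<your-resource>.openai.azure.com/'; // placeholder
const apiKey = '<your-key>';                                  // placeholder

const client = new OpenAIClient(endpoint, new AzureKeyCredential(apiKey));
const embeddings = await client.getEmbeddings('text-embedding-ada-002', ['A brave little fox goes camping']);
console.log(embeddings.data[0].embedding.length); // size of the vector for this sentence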

Users can utilize these services via REST APIs, Python SDK, or the Azure OpenAI Studio’s web-based interface. The Azure OpenAI Studio offers an interactive platform for deploying and managing these models.

Ref: What is Azure OpenAI Service? — Azure AI services | Microsoft Learn

In this article, we will explore how to integrate Azure OpenAI Services by using a scenario that involves creating a child-friendly story and corresponding images.

Story Generator

In this project, we will follow these steps:

  1. Set up the Angular environment
  2. Set up Azure OpenAI
  3. Set up the Azure OpenAI client library
  4. Set up the completion service to generate a story from the user intent
  5. Set up DALL-E to generate friendly images

Set up the Angular environment

To begin development, first install Node.js and the Angular CLI, which are essential for building and serving the application. Download Node.js from its official website, install it, and then install the Angular CLI from the terminal.

Node.js (nodejs.org)

npm install -g @angular/cli
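If you do not already have an Angular workspace, create one and move into it before generating components (the project name here is only an example):

ng new story-book-app
cd story-book-app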

Next, create an Angular component for the storybook:

ng generate component story-book-component

Within the new component, we will develop the storybook interface utilizing HTML and CSS.
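A minimal component class for this interface could look like the sketch below; the property names (userStoryIntent, storyPages) are assumptions that the later snippets in this article build on:

// story-book-component.component.ts -- a minimal sketch, names are illustrative
import { Component } from '@angular/core';

@Component({
  selector: 'app-story-book-component',
  templateUrl: './story-book-component.component.html',
  styleUrls: ['./story-book-component.component.css']
})
export class StoryBookComponentComponent {
  // Story concept typed by the user, bound to an input field in the template.
  userStoryIntent = '';

  // Generated story text and image URLs; matches the storyPages interface defined later.
  storyPages: Array<{ story: string | null | undefined; imageUrl: string | undefined }> = [];

  async generateStory(): Promise<void> {
    // Calls to the completion and DALL-E services (next sections) go here.
  }
}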

If you have questions about the basics, you can start with the Angular playground to understand the project structure:

Home • Angular

Set up Azure OpenAI

Navigate to portal.azure.com and create an Azure OpenAI resource.

Copy the keys and endpoint; they will be used in the next steps.

Reference: Quickstart — Get started using GPT-35-Turbo and GPT-4 with Azure OpenAI Service — Azure OpenAI Service | Microsoft Learn

Set up the Azure OpenAI client library

The JavaScript client library for Azure OpenAI is designed to offer a seamless and natural interface, closely integrated with the broader Azure SDK ecosystem.

It enables connections to both Azure OpenAI resources and the general OpenAI inference endpoint, thereby serving as an excellent option for development with OpenAI, even outside of Azure environments.

npm i @azure/openai
import { OpenAIClient, AzureKeyCredential } from '@azure/openai'
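One option is to wrap the client in an injectable Angular service so components do not construct it directly. This is only a sketch; it assumes the endpoint, azureApiKey and deploymentId values shown in the next section live in a constants file:

// openai.service.ts -- sketch of a thin wrapper around the Azure OpenAI client
import { Injectable } from '@angular/core';
import { OpenAIClient, AzureKeyCredential } from '@azure/openai';
import { endpoint, azureApiKey, deploymentId } from './constants'; // assumed constants file

@Injectable({ providedIn: 'root' })
export class OpenAiService {
  private client = new OpenAIClient(endpoint, new AzureKeyCredential(azureApiKey));

  // Forwards chat messages to the configured chat completion deployment.
  getChatCompletions(messages: any[]) {
    return this.client.getChatCompletions(deploymentId, messages);
  }
}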

Set up the completion service to generate a story from the user intent

As outlined above, we will build a storybook generator with the help of Azure OpenAI.

We will create a component where the user enters a story concept, and pass that user intent to the completion service to generate a kid-friendly story.

Demo Visual

Next, we create a well-guided prompt for the chat completion service so that it produces a child-friendly story from the given user intent.

// Base setup for constants
export const endpoint = "";
export const azureApiKey = "";
export const deploymentId = "";
export const baseImageUrl ="";
export const storyPrompt="Imagine you're a storyteller, sitting in a circle with a group of eager kids under 6 years old, their eyes wide with anticipation. Your task is to tell them a story, but in a special way. You'll take them on a journey through a magical or adventurous tale, filled with colorful characters, exciting twists, and a heartwarming lesson at the end. After you've captured their imagination with the story, you'll gently guide them back to the real world by summarizing the entire adventure in 3-5 short, simple paragraphs. These paragraphs should be easy for little ears to understand, using words that are friendly and comforting. Think about describing the main characters in a way that makes them smile, the problem in the story as a puzzle that gets solved, and the ending as a cozy blanket wrapping up the tale. Remember, your summary isn't just retelling the story; it's like drawing a beautiful picture with words that leave sparkles in their minds and warmth in their hearts"
const storyChat: any[] = [
  { role: "system", content: `${storyPrompt} ${this.userStoryIntent}` }
];
const client = new OpenAIClient(endpoint, new AzureKeyCredential(azureApiKey));
// deploymentId is the constant above; set it to your Azure chat deployment name.
const result = await client.getChatCompletions(deploymentId, storyChat);

In the next step, we will generate a title or summary of the generated story, which will serve as the instruction for DALL-E to create an image.

export const visualInstructionsPrompt="Craft a captivating, child-friendly title for a canvas picture, inspired by a story, using no more than 50 characters. Imagine you're sprinkling a pinch of magic onto a canvas, transforming a whole adventure into a few words that sparkle with fun and imagination. This title should be a cozy, colorful blanket of words, inviting young minds to dream and wonder. Think of it as naming a treasure map that leads to a chest full of stories, where each word is a jewel, and the fewer you use, the brighter they shine. Your goal is to capture the essence of the story in a playful, enchanting way that would make any child eager to dive into the tale behind the picture."
// Initialize a variable to hold the selected story line, undefined initially.
let selectedStoryLine: string | null | undefined = undefined;

// Iterate over the choices in the result to find a story line.
for (const choice of result.choices) {
  // Assign the message content of the choice to selectedStoryLine, if available.
  selectedStoryLine = choice.message?.content;
}

// Check if a story line was not found (remains undefined).
if (selectedStoryLine == undefined) {
  // Exit the function as there's no story line to process.
  return;
}

// Prepare the chat messages for the title instructions, including the selected story line.
const visualChat: any[] = [
  { role: "system", content: `${visualInstructionsPrompt} ${selectedStoryLine}` }
];

// Ask the chat deployment for a title based on the prepared prompt.
const results = await client.getChatCompletions(deploymentId, visualChat);
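The generated title can be extracted from the response the same way as the story; the headline variable below is the one referenced in the image request later:

// Take the title text from the returned choices (mirrors the story extraction above).
let headline: string | undefined = undefined;
for (const choice of results.choices) {
  headline = choice.message?.content ?? undefined;
}

// Without a headline there is nothing to turn into an image.
if (headline == undefined) {
  return;
}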

Set up DALL-E to generate friendly images

export const imageGenPrompt="Create a pencil drawing for a given storyline, incorporating more descriptive pictures requires a thoughtful approach that captures the essence and nuances of the narrative"

Approach 1:

// deploymentName must be your DALL-E deployment; n and size control image count and resolution.
const deploymentName = 'YOUR_DALLE_DEPLOYMENT_NAME';
const n = 1;
const size = '1024x1024';
const results = await client.getImages(deploymentName, imageGenPrompt, { n, size });

for (const image of results.data) {
  console.log(`Image generation result URL: ${image.url}`);
}

Approach 2:

// Full URL of your Azure OpenAI image generation endpoint, including the api-version query parameter.
const azureImageUrl = 'AZURE_IMAGE_URL';

// API key for authenticating requests. Replace YOUR_API_KEY_HERE with your actual API key.
const apiKeyValue = "YOUR_API_KEY_HERE";

// HTTP options including the api-key header for authentication.
const httpOptions = {
  headers: new HttpHeaders({
    'api-key': apiKeyValue
  })
};

// Request body: the image prompt (base prompt plus the generated headline), image size and count.
const body = {
  prompt: `${imageGenPrompt} ${headline}`,
  size: '1024x1024',
  n: 1
};

// imageResponse is assumed to be an interface with at least an 'id' field returned by the endpoint.
this.http.post<imageResponse>(azureImageUrl, body, httpOptions).subscribe(response => {
  const imageID = response['id'];
  // ...poll for the final image URL using imageID (see below).
});

We will use the imageID to retry against the same endpoint until the final image URL is available.
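A simple way to do this is to poll the endpoint with the returned id until the operation succeeds. The sketch below makes assumptions about the preview API's response shape (a status field and result.data[0].url) and expects azureImageUrl and apiKeyValue to be available at component or constants scope, so adjust it to whatever your api-version actually returns:

// Poll for the final image URL (sketch; URL path and response shape are assumptions).
private pollForImage(imageID: string, attempt = 0): void {
  if (attempt >= 10) {
    return; // give up after a few tries
  }
  const options = { headers: new HttpHeaders({ 'api-key': apiKeyValue }) };
  this.http.get<any>(`${azureImageUrl}/${imageID}`, options).subscribe(response => {
    if (response.status === 'succeeded' && response.result?.data?.length) {
      const finalImageUrl: string = response.result.data[0].url;
      // Use finalImageUrl, e.g. when building the story pages below.
    } else {
      // Not ready yet: try again after a short delay.
      setTimeout(() => this.pollForImage(imageID, attempt + 1), 2000);
    }
  });
}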

// Interface describing a single storybook page.
export interface storyPages {
  story: string | null | undefined; // story content (can be null, undefined, or a string)
  imageUrl: string | undefined;     // URL of the generated image (can be undefined or a string)
}
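One possible way to assemble the pages is to split the generated story into paragraphs and pair them with the image URL; the splitting rule and the choice to show the image on the first page only are assumptions:

// Build the pages list from the generated story and the image URL (sketch).
buildStoryPages(story: string, imageUrl: string | undefined): storyPages[] {
  return story
    .split(/\n\s*\n/)                  // one page per paragraph
    .map(paragraph => paragraph.trim())
    .filter(paragraph => paragraph.length > 0)
    .map((paragraph, index) => ({
      story: paragraph,
      imageUrl: index === 0 ? imageUrl : undefined // image on the first page only
    }));
}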

We can then bind the final list of story pages to the book template:

<section class="book-container">
  <section id="book-pages" class="book-pages">
    <article *ngFor="let page of storyPages" class="book-page">
      <section class="book-page-content">
        <p class="book-page-story">{{ page.story }}</p>
      </section>
      <section *ngIf="page.imageUrl" class="book-page-image-container">
        <img [src]="page.imageUrl" width="250" height="250" class="book-page-image" alt="Story image">
      </section>
    </article>
  </section>
</section>

Reference/Example to Create Book Component

Pure JavaScript Book Effect (codepen.io)

DEMO

Sample Demo

Help links:

sannidhisiva/AzureOpenAiWithAngular (github.com)

Quickstart — Get started using GPT-35-Turbo and GPT-4 with Azure OpenAI Service — Azure OpenAI Service | Microsoft Learn

Home • Angular
