![](BLOG/2025/08/attachments/how-to-create-an-ai-chat-with-gemini-in-construct-3.webp)

Ever dreamed of creating NPCs that can truly converse, or game worlds that react dynamically to player questions? This guide shows you how to bring the power of a **conversational AI** like Gemini directly into your **Construct 3 projects**.

A recent video inspired me to dust off this idea and create an updated guide. I had explored AI integration in the past, but technologies evolve quickly. While earlier AI tools had their limits, today's powerful models like **Gemini** have completely changed the game. The spotlight is now on chatbots like ChatGPT and Gemini: incredibly powerful tools, despite their limitations. Integrating these models into the Construct 3 canvas with JavaScript unlocks fascinating creative possibilities, from NPCs with dynamic personalities to procedurally generated quests and lore.

By the end of this guide, you won't just connect to an API; you'll have built a functional chat system with memory, ready to be integrated into your own games.

### Step 1: Get Your Gemini API Key

![](BLOG/2025/08/attachments/gemini-e-construct-3-01.webp)

Before you can write any code, you'll need an **API key**. This key acts as a unique identifier for your application, allowing Google to authenticate your requests and track usage. And the best part? You don't have to spend a dime to start experimenting. Google's free tier provides a generous number of requests per minute, more than enough for development and testing.

Getting the key is simple:

1. Go to the [**Google AI Studio**](https://aistudio.google.com) website.
2. Click the **"Get API Key"** button.
3. On the next screen, click **"Create API Key"**. An alphanumeric key will be generated.

**Important:** Treat this key like a password. Never share it, commit it to a public repository, or embed it directly in client-side code. For a real application, you would store and use the key through a secure server-side method.

### Step 2: Setting Up the Construct 3 Project

![](BLOG/2025/08/attachments/gemini-e-construct-3-03.webp)

Since we're using Construct 3, our application will run in a browser, so the natural choice of language is **JavaScript**, or in our case **TypeScript**. Now, let's build the basic interface in Construct 3. We'll keep it simple for this example. On your main layout, add two objects:

- A **Text Input** object where the user can type their question (the "prompt").
- A **Text** object to display Gemini's response.

### Step 3: Structuring Your Code

![](BLOG/2025/08/attachments/gemini-e-construct-3-04.webp)

To keep the project organized, we'll use three scripts:

- **Gemini.ts**: The core logic for our AI. This file is responsible for building requests to the Gemini API, sending them, and processing the responses.
- **importsForEvents.ts**: A bridge file required by Construct. It imports our module so that its functions can be called from the event sheet scripting environment.
- **main.ts**: The project's entry point, executed on startup. We'll use it to initialize global variables, like a convenient reference to Construct's `runtime` object.

The setup for `importsForEvents.ts` and `main.ts` is minimal.
`importsForEvents.ts` contains a single line to import our functions:

```ts
import * as Gemini from "./Gemini.js";
```

We mainly need `main.ts` to make Construct's `runtime` variable easily accessible throughout the project:

```ts
declare global {
    var runtime: IRuntime;
}

runOnStartup(async (runtime: IRuntime) => {
    globalThis.runtime = runtime;
});
```

### Step 4: Making Your First API Call (A Stateless Request)

Now, let's dive into the core logic inside our `Gemini.ts` file. To communicate with Gemini, as indicated in the [official Google documentation](https://ai.google.dev/api/generate-content#method:-models.generatecontent), we need to send a `POST` request to a specific endpoint. The process breaks down into four fundamental steps:

1. **Assemble the request URL**: This URL tells Google's servers which AI model to use and authenticates our request with the API key.
2. **Prepare the payload**: The payload is the data we send. For Gemini, this must be a specifically structured JSON object containing the user's prompt.
3. **Execute the request** using JavaScript's Fetch API.
4. **Parse the JSON response** to extract the text generated by Gemini.

Let's translate these steps into a clean, reusable `async` function called `ask`:

```ts
const modelGemini = "gemini-2.5-flash";

export async function ask(obj: { key: string, question: string, runtime: IRuntime }): Promise<string> {
    const { key, question, runtime } = { ...obj };

    // 1. Assemble the request URL: model name plus API key
    const url = `https://generativelanguage.googleapis.com/v1/models/${modelGemini}:generateContent?key=${key}`;
    // const url = `https://generativelanguage.googleapis.com/v1beta/models/${modelGemini}:generateContent?key=${key}`;

    // 2. Prepare the payload: a JSON object wrapping the user's prompt
    const payload = {
        contents: [{
            parts: [{ text: question }]
        }]
    };

    try {
        // 3. Execute the request with the Fetch API
        const response = await fetch(url, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify(payload)
        });

        if (!response.ok) {
            throw new Error(`Server error: ${response.status}`);
        }

        // 4. Parse the JSON response; the generated text is nested inside
        // the first candidate: data.candidates[0].content.parts[0].text
        const data = await response.json();
        const answer = data.candidates[0].content.parts[0].text;
        return answer;
    } catch (error) {
        console.error("Error details: ", error);
        return `An error occurred. (${(error as Error).message})`;
    }
}
```

### Step 5: Connecting the Function to the UI

Now that our `ask` function is written, let's wire it up to the Construct 3 interface using event sheet scripting. We'll trigger this function when the user presses the 'Enter' key in the text input field.

![](BLOG/2025/08/attachments/gemini-e-construct-3-02.webp)

```ts
// TypeScript code in a Construct 3 event sheet
const key = runtime.globalVars.API_KEY;
const question = runtime?.objects?.Question?.getFirstInstance()?.text || "";

const answer = question.trim().length > 0
    ? await Gemini.ask({ key, question, runtime })
    : "Missing Question!";

const textAnswer = runtime.objects.Answer.getFirstInstance();
if (textAnswer) { textAnswer.text = answer; }
```

### Step 6: From Forgetful Bot to Conversational AI: Adding Memory

Our chat works, but it has **the memory of a goldfish**. Each prompt is treated as a completely separate, standalone interaction, making it impossible to ask follow-up questions or build a coherent dialogue. To turn it into a real conversation, we need to give it a memory.
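You can see the problem with a quick test from the same event sheet context as Step 5. Here's a hypothetical exchange (Gemini's actual wording will vary):

```ts
// Turn 1: the model happily acknowledges the information
await Gemini.ask({ key, question: "My name is Alice. Please remember it.", runtime });
// -> "Nice to meet you, Alice! I'll keep that in mind."

// Turn 2: a brand-new, isolated request, so the model has no idea who Alice is
await Gemini.ask({ key, question: "What's my name?", runtime });
// -> "I'm sorry, but I don't know your name."
```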
The solution, as outlined in [the official documentation](https://ai.google.dev/gemini-api/docs/text-generation#multi-turn-conversations), is both elegant and simple: with each new request, we send the **entire conversation history**, specifying the `role` of each message (`user` for our prompts and `model` for Gemini's replies).

Think of it like catching a friend up on a long conversation before asking your next question. Instead of treating each query as a fresh start, we provide Gemini with the full transcript, giving it the context it needs to hold a meaningful, multi-turn dialogue.

#### Diving into the Code

First, let's modify `main.ts` to create and initialize a global variable that will hold the history. We'll take this opportunity to insert a "system prompt": an initial instruction that defines our AI's personality and behavior from the start, guiding the tone of all its future responses.

```ts
declare global {
    var runtime: IRuntime;
    var chatHistory: { role: "user" | "model", parts: [q: { text: string }] }[];
}

runOnStartup(async (runtime: IRuntime) => {
    globalThis.runtime = runtime;

    // Seed the history with a "system prompt" that sets the AI's personality
    globalThis.chatHistory = [
        {
            role: "user",
            parts: [{
                text: "You're a helpful and precise virtual assistant. Your responses are clear and concise."
            }]
        },
        {
            role: "model",
            parts: [{
                text: "Instructions received. I'm ready."
            }]
        }
    ];
});
```

To implement memory, we'll evolve our `ask` function into a new, more powerful one called `chat`. This function won't just send a single question; it will manage the entire conversation history, as illustrated in the snapshot after this list. The workflow expands with a few key steps:

1. Before sending the request, add the user's question (`user` role) to the history (`chatHistory`).
2. The request payload no longer contains a single question, but the entire `chatHistory` array.
3. After receiving a valid response, add it to the history with the `model` role.
4. If the API call fails, remove the user's last question from the history. This is crucial for maintaining state integrity: it ensures our `chatHistory` accurately reflects the conversation the AI is aware of, preventing it from getting out of sync on the next turn.
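To make this concrete, here's a plausible snapshot of what `chatHistory` holds after one full exchange following the system prompt (the question and reply shown are purely illustrative):

```ts
[
    // The initial "system prompt" pair set up in main.ts
    { role: "user",  parts: [{ text: "You're a helpful and precise virtual assistant. Your responses are clear and concise." }] },
    { role: "model", parts: [{ text: "Instructions received. I'm ready." }] },
    // One real exchange appended at runtime
    { role: "user",  parts: [{ text: "What is Construct 3?" }] },
    { role: "model", parts: [{ text: "Construct 3 is a browser-based game engine..." }] }
]
// The whole array is sent as the `contents` field with every new request.
```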
Let's see how these steps translate into the complete `chat` function:

```ts
export async function chat(obj: { key: string, question: string, chatHistory: { role: string, parts: [q: { text: string }] }[], runtime: IRuntime }): Promise<string> {
    const { key, question, chatHistory, runtime } = { ...obj };

    const url = `https://generativelanguage.googleapis.com/v1/models/${modelGemini}:generateContent?key=${key}`;
    // const url = `https://generativelanguage.googleapis.com/v1beta/models/${modelGemini}:generateContent?key=${key}`;

    // Optimistically add the user's new message to the history
    chatHistory.push({ role: "user", parts: [{ text: question }] });

    // The payload now carries the entire conversation, not a single question
    const payload = { contents: chatHistory };

    try {
        const response = await fetch(url, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify(payload)
        });

        if (!response.ok) {
            // If there's an error, remove the unanswered question from the history
            chatHistory.pop();
            throw new Error(`Server error: ${response.status}`);
        }

        const data = await response.json();
        const answer = data.candidates[0].content.parts[0].text;

        // Add the model's response to the history
        chatHistory.push({ role: "model", parts: [{ text: answer }] });

        return answer;
    } catch (error) {
        console.error("Error details: ", error);
        return `An error occurred. (${(error as Error).message})`;
    }
}
```

The final step is to update the Construct 3 event sheet: simply replace the call to `Gemini.ask` with our new `Gemini.chat` function. Once that's done, our system is complete!

```ts
const answer = question.trim().length > 0
    ? await Gemini.chat({ key, question, chatHistory, runtime })
    : "Missing Question!";
```

### Beyond the Chatbot: Your AI-Powered Game Awaits

Congratulations! You've just unlocked a powerful feature: a true conversational chat with memory, powered by Gemini, is now integrated into your Construct 3 project. Imagine an NPC that doesn't just repeat canned lines, but remembers your previous conversations. What if a shopkeeper could dynamically generate their inventory based on world events, or a quest-giver could create new challenges on the fly?

The key to unlocking these advanced features is **[structured output](https://ai.google.dev/gemini-api/docs/structured-output)**: instructing the AI to provide responses not just as conversational text, but as formatted data (like JSON) that your game can instantly understand and act upon. We'll explore how to implement this advanced technique in the next article. Stay tuned!