The life of a prompt: Demystifying Gemini
Have you ever wondered what happens when you ask generative AI (gen AI) a question? What goes on behind the scenes when it receives your prompt? And how does it protect the privacy and security of your data while still drawing on the most relevant information from your company's corpus (all those files and emails you can access)?
Well, today we're looking at the life of a prompt in Gemini for Google Workspace. When a Workspace business user submits a question to Gemini, what sequence of events does that kick off?
The four stages in the life of a prompt
Let’s say that you give Gemini the following prompt in the Google Docs side panel: “Create a summary of Q3 sales performance.” Here’s what happens next:
1. Gemini retrieves only relevant content that you already have access to in Workspace. This context helps it understand the prompt's meaning and grounds the response in real facts. In this case, Gemini might look at specific emails, previous sales presentations, and related documents.
2. This information and context are passed to the Gemini model, but Gemini doesn't store them or use them to train the model. The data disappears after your Gemini session ends.
3. Gemini creates a tailored response from within your trust boundary: the virtual barrier that controls which information you can access and share, both within and beyond your organization. Your interactions with Gemini all happen within this boundary.
4. All of your organization's security, privacy, and access controls are automatically applied as soon as you insert Gemini's response into your Google Doc. That means your organization controls where your data is stored and processed, and ensures that only authorized parties can access it.
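To make the flow above concrete, here is a minimal sketch of a retrieval-then-generate pipeline with access-aware retrieval. Everything in it (the `Document` class, the `retrieve` and `generate` functions, the tiny in-memory corpus) is a hypothetical illustration of the pattern, not Gemini's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    body: str
    readers: set  # users allowed to read this document (stand-in for Workspace ACLs)

# Hypothetical corpus standing in for a user's files and emails.
CORPUS = [
    Document("Q3 sales deck", "Q3 revenue grew 12 percent quarter over quarter.", {"alice"}),
    Document("HR handbook", "Vacation policy details.", {"bob"}),
]

def retrieve(user: str, prompt: str) -> list:
    """Stage 1: return only documents the user can already access that
    look relevant to the prompt (keyword overlap as a crude stand-in
    for real semantic retrieval)."""
    terms = {t.lower().strip(".,?") for t in prompt.split()}
    return [
        d for d in CORPUS
        if user in d.readers
        and terms & set(d.title.lower().split() + d.body.lower().split())
    ]

def generate(prompt: str, context: list) -> str:
    """Stages 2-3: a stub for the model call; the context is used for
    this response only and is never persisted."""
    grounding = " ".join(d.body for d in context)
    return f"Summary based on {len(context)} source(s): {grounding}"

def handle_prompt(user: str, prompt: str) -> str:
    context = retrieve(user, prompt)   # retrieval stays inside the trust boundary
    response = generate(prompt, context)
    del context                        # stage 2: context is discarded after the session
    return response
```

Note the design point this sketch captures: access control is enforced *before* retrieval, so a user who lacks permission on a document never has it pulled into their context in the first place.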